Re: reminder: content processes (e10s) are now used by desktop Firefox

2013-08-02 Thread trev
On 02.08.2013 03:35, Gavin Sharp wrote:
 The experiment you're referring to was Adblock running in Firefox with
 remote tabs enabled, I think. I'm not up to date with how that
 experiment was progressing, but I think there are some fundamental
 differences between that scenario and the background content processes
 being used for the background thumbnailing service that might not make
 the two cases directly comparable.
 
 It would be valuable for an adblockplus developer to investigate, certainly.

Unless I missed something, this is about Adblock Plus supporting the original 
incarnation of Firefox Mobile, the one with two separate processes for chrome 
and content. This code is long gone in the current Adblock Plus versions - it 
was a real pain to support due to lots of unavoidable code duplication. The 
last version that still has it is Adblock Plus 1.3.10.

The code in question was explicitly running in Firefox Mobile only. It used 
the messageManager.loadFrameScript() API to inject code into the content process 
of new tabs - I doubt that it will work the same way here; Adblock Plus would 
probably need to look explicitly for these <browser remote="true"> elements (is 
there an event when they are created?).

Altogether, supporting this in Adblock Plus should be possible - but it will 
require significant amounts of additional code and introduce quite a bit of new 
complexity. I also have doubts whether this is work that should receive 
priority.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: reminder: content processes (e10s) are now used by desktop Firefox

2013-08-02 Thread Marco Bonardo

On 02/08/2013 03:50, Nicholas Nethercote wrote:

On Thu, Aug 1, 2013 at 6:29 PM, Gavin Sharp ga...@gavinsharp.com wrote:


Do you have specific issues you're worried about, or are you just speaking
about issues in general?


This AdBlock issue worries me specifically.  And the fact that there's
breakage with our #1 add-on makes me worry in general.


Ads are particularly annoying when you are using a page; in a thumbnail 
that effect is much reduced or nonexistent. The only real problem seems to 
be that the thumbnail appears different from the page you usually see, so 
you may not recognize it visually.


-m


Re: reminder: content processes (e10s) are now used by desktop Firefox

2013-08-02 Thread Andreas Gal


We are working on ways to make add-ons like Adblock Plus work with e10s on 
desktop without major changes to the add-on. That mechanism might work 
for the thumbnail case. Gavin can reach out to trev and discuss whether 
this is something we should try to make work. I do agree this isn't 
super high priority right now, though, and we can live with this behavior 
for thumbnails. Using e10s for this is really cool and greatly improves 
responsiveness, after all.


Andreas

t...@adblockplus.org wrote:

[quoted message snipped; identical to trev's post above]


Re: std::unique_ptr, std::move,

2013-08-02 Thread Ehsan Akhgari
On Thu, Aug 1, 2013 at 7:46 PM, Mike Hommey m...@glandium.org wrote:

 On Thu, Aug 01, 2013 at 04:25:25PM -0700, Matt Brubeck wrote:
  Debian doesn't keep Iceweasel up to date in oldstable anyway.

 Actually, I'm providing backports for oldstable. 24 is as far as I'm
 ready to go to support oldstable until its actual EOL next year. Which
 is why I want ESR24 to remain compilable with gcc 4.4.


Upgrading minimum compiler requirements doesn't imply backporting those
requirements to Aurora where ESR24 is right now.  Are you opposed to
updating our minimum supported gcc to 4.7 on trunk when Firefox OS is ready
to switch?

Cheers,
--
Ehsan
http://ehsanakhgari.org/


Re: std::unique_ptr, std::move,

2013-08-02 Thread Mike Hommey
On Fri, Aug 02, 2013 at 08:27:02AM -0400, Ehsan Akhgari wrote:
 [earlier quoted exchange snipped]
 
 Upgrading minimum compiler requirements doesn't imply backporting those
 requirements to Aurora where ESR24 is right now.  Are you opposed to
 updating our minimum supported gcc to 4.7 on trunk when Firefox OS is ready
 to switch?

Not at all, as long as ESR24 keeps building with gcc 4.4. I've even been
complaining about b2g still using gcc 4.4 on trunk...

Mike


Re: reminder: content processes (e10s) are now used by desktop Firefox

2013-08-02 Thread Philip Chee
On 02/08/2013 16:57, t...@adblockplus.org wrote:

 [quoted message snipped]

It has just occurred to me that Flashblock would probably be affected
similarly.

Phil

-- 
Philip Chee phi...@aleytys.pc.my, philip.c...@gmail.com
http://flashblock.mozdev.org/ http://xsidebar.mozdev.org
Guard us from the she-wolf and the wolf, and guard us from the thief,
oh Night, and so be good for us to pass.


Re: std::unique_ptr, std::move,

2013-08-02 Thread Brian Smith
On Fri, Aug 2, 2013 at 2:58 PM, Mike Hommey m...@glandium.org wrote:

  Upgrading minimum compiler requirements doesn't imply backporting those
  requirements to Aurora where ESR24 is right now.  Are you opposed to
  updating our minimum supported gcc to 4.7 on trunk when Firefox OS is ready
  to switch?

 Not at all, as long as ESR24 keeps building with gcc 4.4. I've even been
 complaining about b2g still using gcc 4.4 on trunk...


This adds too much risk of security patches failing to backport from
mozilla-central to ESR 24. Remember that one of the design goals of ESR is
to minimize the amount of effort we put into it so that ESR doesn't slow
down real Firefox. AFAICT, most people don't even want ESR at all. So, a
constraint to keep ESR 24 compatible with GCC 4.4 needs to come with some
resources for doing the backports.

Cheers,
Real Brian


Re: std::unique_ptr, std::move,

2013-08-02 Thread Brian Smith
On Fri, Aug 2, 2013 at 1:36 AM, Joshua Cranmer pidgeo...@gmail.com wrote:

 On 8/1/2013 5:46 PM, Brian Smith wrote:

 FWIW, I talked about this issue with a group of ~10 Mozillians here in
 Berlin and all of them (AFAICT) were in favor of requiring that the latest
 versions of GCC be used, or even dropping GCC support completely in favor
 of clang, if it means that we can use more C++ language features and if it
 means we can avoid wasting time writing polyfills. Nobody saw installing a
 new version of GCC as part of the build environment as being a significant
 impediment.


 And how many of them have actually tried to install new versions of gcc
 from scratch? As someone who works in compiler development, I can tell you
 firsthand that setting up working toolchains is an intricate dance of
 getting several tools to work together--the libc headers, the standard C++
 library headers, debugger, linker, and compiler are all maintained by
 different projects, and a version mismatch between any two of these can
 foul things up in ways that require a lot of time and patience to fix,
 even by people who know what they're doing. Look, for example, at some
 of the struggles we have had to go through to get Clang-on-Linux working on
 the buildbots.


We have mozilla-build for Windows. From what you say, it sounds like we
should have a mozilla-build for Linux too, one that would include a pre-built
GCC or Clang or whatever we choose as *the* toolchain for desktop Linux.


 Also, the limiting factor in using new C++ features right now is b2g,
 which builds with g++-4.4. If we fixed that, the minimum version per this
 policy would be g++-4.7; the limiting factor would then either be STLport
 (which is much slower to adopt C++11 functionality than other libraries tied
 primarily to one compiler) or MSVC, which has yet to implement several
 C++11 features.


Moving to GCC 4.7 is one of the requirements for the B2G system security
project, so I hope that happens soon anyway. Also, the set of code that is
compiled for B2G is different from (though obviously overlapping with) the
set of code that is compiled for desktop. In fact, if my understanding of bug
854389 is correct, we could ALREADY be building Gecko with GCC 4.7 on B2G
if we did one of two things: (1) add a one-line patch to some Android
header file, or (2) compile gonk with GCC 4.4 and compile Gecko with GCC
4.7 (or clang). If we have any more delays in upgrading to Jelly Bean then
we should consider one or both of these options.


 Instead of arguing right now about whether or not the minimum version
 policy suggested by glandium and me is too conservative, perhaps we should
 wait until someone proposes a feature whose need for polyfilling would
 depend on that policy.


That sounds reasonable to me. So, based on that then, let's get back to my
original question that motivated the discussion of the policy: If we add
std::move, std::forward, and std::unique_ptr to STLPort for Android and
B2G, can we start using std::move, std::forward, and std::unique_ptr
throughout Gecko?
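
To make the question concrete, here is a minimal sketch of the kind of code
this would unlock (Widget and make_widget are illustrative names, not Gecko
code):

    // Hedged sketch: what std::unique_ptr and std::move buy us once
    // STLPort has them.
    #include <memory>
    #include <utility>
    #include <vector>

    struct Widget { int id; };

    // The return type states the ownership transfer explicitly; no raw
    // new/delete at call sites.
    std::unique_ptr<Widget> make_widget(int id) {
      std::unique_ptr<Widget> w(new Widget());
      w->id = id;
      return w;  // moved out implicitly on return
    }

    int main() {
      std::unique_ptr<Widget> w = make_widget(1);
      std::vector<std::unique_ptr<Widget>> widgets;
      widgets.push_back(std::move(w));  // transfer ownership; w is now null
      return 0;
    }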

Cheers,
Brian


Re: reminder: content processes (e10s) are now used by desktop Firefox

2013-08-02 Thread Jeff Gilbert
It's certainly worrying given the number of security- and privacy-related 
add-ons people rely on. Seeing ads in thumbnails is relatively harmless 
(if disconcerting), but if someone is relying on an add-on for important 
security or privacy reasons, and we auto-updated them and bypassed their 
protections, that's more serious.

-Jeff

- Original Message -
From: Philip Chee philip.c...@gmail.com
To: dev-platform@lists.mozilla.org
Sent: Friday, August 2, 2013 12:30:29 PM
Subject: Re: reminder: content processes (e10s) are now used by desktop Firefox

[quoted message snipped; identical to Philip Chee's post above]


On builds getting slower

2013-08-02 Thread Gregory Szorc

(Cross posting. Please reply to dev.builds.)

I've noticed an increase in the number of complaints about the build 
system recently. I'm not surprised. Building mozilla-central has gotten 
noticeably slower. More on that below. But first, a request.


Many of the complaints I've heard have been from overhearing hallway 
conversations, noticing non-directed complaints on IRC, having 3rd 
parties report anecdotes, etc. *Please, please, please voice your 
complaints directly at me and the build peers.* Indirectly complaining 
isn't a very effective way to get attention or to spur action. I 
recommend posting to dev.builds so complaints and responses are public 
and easily archived. If you want a more personal conversation, just get 
in contact with me and I'll be happy to explain things.


Anyway, on to the concerns.

Builds are getting slower. http://brasstacks.mozilla.com/gofaster/#/ has 
high-level trends for our automation infrastructure. I've also noticed 
my personal machines taking ~2x longer than they did 2 years ago. 
Unfortunately, I can't give you a precise breakdown over where the 
increases have been because we don't do a very good job of recording 
these things. This is one reason why we have better monitoring on our Q3 
goals list.


Now, on to the reasons why builds are getting slower.

# We're adding new code at a significant rate.

Here is a breakdown of source file types in the tree by Gecko version. 
These are file types that are directly compiled or go through code 
generation to create a compiled file.


Gecko 7: 3359 C++, 1952 C, 544 CC, 1258 XPIDL, 110 MM, 195 IPDL
Gecko 14: 3980 C++, 2345 C, 575 CC, 1268 XPIDL, 272 MM, 197 IPDL, 30 WebIDL
Gecko 21: 4606 C++, 2831 C, 1392 CC, 1295 XPIDL, 292 MM, 228 IPDL, 231 WebIDL
Gecko 25: 5211 C++, 3029 C, 1427 CC, 1268 XPIDL, 262 MM, 234 IPDL, 441 WebIDL


That nets totals of:

7: 7418
14: 8667
21: 10875
25: 11872

As you can see, we're steadily adding new source code files to the tree. 
mozilla-central today has 60% more source files than Gecko 7! If you 
assume number of source files is a rough approximation for compile time, 
it's obvious why builds are getting slower: we're building more.


As large new browser features like WebRTC and the ECMAScript 
Internationalization API continue to dump hundreds of new source files 
in the tree, build times will increase. There's nothing we can do about 
this short of freezing browser features. That's not going to happen.


# Header dependency hell

We have hundreds of header files that are included in hundreds or even 
thousands of other C++ files. Any time one of these widely-used headers 
changes, the object files get invalidated by the build system 
dependencies and we have to re-invoke the compiler. This also likely 
invalidates ccache, so it's just like a clobber build.


No matter what we do to the build backend to make clobber builds faster, 
header dependency hell will continue to undermine this progress for 
dependency builds.


I don't believe the build config group is in a position to tackle header 
dependency hell at this time. We are receptive to good ideas and will 
work with people to land patches. Perhaps an ad-hoc group of Platform 
developers can band together to address this?
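
One concrete technique that helps here (a sketch, not a prescription): prefer
forward declarations to #includes in headers wherever only pointers or
references are used, so edits to a header stop rippling into every consumer.
Thing and Consumer below are illustrative names, not real Gecko classes:

    // Consumer.h
    // Before: #include "Thing.h" -- every file including Consumer.h
    // then transitively depends on Thing.h and rebuilds when it changes.

    // After: a forward declaration suffices because only Thing* is used.
    class Thing;

    class Consumer {
     public:
      void SetThing(Thing* aThing) { mThing = aThing; }
     private:
      Thing* mThing;  // pointer members don't need the complete type
    };

    // Consumer.cpp still does #include "Thing.h" where the full type
    // is actually needed.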


# Increased reliance on C++ language features

I *suspect* that our increased reliance on C++ language features such as 
templates and new C++11 features is contributing to slower build times. 
It's been long known that templates and other advanced language features 
can blow up the compiler if used in certain ways. I also suspect that 
modern C++11 features haven't been optimized to the extent years-old C++ 
features have been. Combine this with the fact compilers are working 
harder than ever to optimize code and it wouldn't surprise me if a CPU 
cycle invested in the compiler isn't giving the returns it used to.
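
On the template side, one mitigation that may be worth measuring (a hedged
sketch; whether it pays off for Gecko is exactly the kind of question a
profiling pass would answer) is C++11 explicit instantiation declarations,
which stop every translation unit from re-instantiating a heavy template:

    // HeavyTemplate is an illustrative name.
    template <typename T>
    struct HeavyTemplate {
      T Process(T aValue) { return aValue; }  // imagine lots of code here
    };

    // In a widely-included header: promise that an instantiation exists
    // elsewhere, so each including file skips instantiating it.
    extern template struct HeavyTemplate<int>;

    // In exactly one .cpp file: pay the instantiation cost once.
    template struct HeavyTemplate<int>;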


I would absolutely love for a compiler wizard to sit down and profile 
Gecko C++ in Clang, GCC, and MSVC. If there are things we can do to our 
source or to the compilers themselves to make things faster, that could 
be a huge win.


Like dependency hell, I don't believe the build config group will tackle 
this any time soon.


# Clobbers are more frequent and more annoying

Clobbers are annoying. It annoys me every time I see the CLOBBER file 
has been updated. I won't make excuses for open bugs on known 
required-clobber issues: we should fix them all.


I suspect clobbers have become more annoying in recent months because 
overall build times have increased. If builds only took 5 minutes, I'm 
not sure the cries would be as loud. That's no excuse for not fixing it, 
however. Please continue to loudly complain every time there is a clobber.


# Slowness Summary

There are many factors contributing to making the build system slower. I 
would argue that the primary contributors are not within the control of 
the build config group. Instead, the fault lives with all the compiled 
code (mainly 

Re: Standard C/C++ and Mozilla

2013-08-02 Thread Brian Smith
On Wed, Jul 31, 2013 at 7:41 PM, Joshua Cranmer pidgeo...@gmail.com wrote:

 implementation, libc++, libstdc++, and stlport. Since most nice charts of
 C++11 compatibility focus on what the compiler needs to do, I've put
 together a high-level overview of the major additions to the standard
 library [3]:
 * std::function/std::bind -- Generalization of function pointers


Note that Eric Rescorla implemented his own std::bind polyfill when he was
working on WebRTC. I also have some new code I am working on where
std::bind is extremely helpful.
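
As a flavor of why it helps (an illustration, not Eric's actual code):
std::bind adapts a member function plus some fixed arguments into a plain
callback. Connection and OnTimeout are made-up names:

    #include <functional>
    #include <iostream>

    struct Connection {
      void OnTimeout(int aSeconds) {
        std::cout << "timeout after " << aSeconds << "s\n";
      }
    };

    int main() {
      Connection conn;
      // Bind the receiver and argument now; invoke later with no arguments.
      std::function<void()> callback =
          std::bind(&Connection::OnTimeout, &conn, 30);
      callback();  // prints "timeout after 30s"
      return 0;
    }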


 Now that you have the background for what is or will be in standard C++,
 let me discuss the real question I want to discuss: how much of this should
 we be using in Mozilla?



 For purposes of discussion, I think it's worth breaking down the C++ (and
 C) standard library into the following components:
 * Containers--vector, map, etc.
 * Strings
 * I/O
 * Platform support (threading, networking, filesystems, locales)
 * Other helpful utilities (std::random, std::tuple, etc.)

 The iostream library has some issues with using it (particularly static
 constructors IIRC), and is not so usable for most of the things that Gecko
 needs to do.


It is very useful for building a logging interface that is safer and more
convenient than NSPR's printf-style logging. Note that, again, Eric
Rescorla already built a (partial) iostream-based wrapper around NSPR for
WebRTC. I would say that, if there is no additional overhead, then we
should consider making iostream-based logging the default way of doing
things in Gecko because it is so much less error-prone.
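
A minimal sketch of the shape such a wrapper can take, assuming NSPR's
PR_LOG underneath (illustrative only, not Eric's actual WebRTC code):

    #include <sstream>
    #include "prlog.h"

    static PRLogModuleInfo* GetDemoLog() {
      static PRLogModuleInfo* sLog = PR_NewLogModule("demo");
      return sLog;
    }

    // Accumulates an ostream-formatted message and flushes it to PR_LOG
    // when the temporary dies at the end of the statement.
    class LogMessage {
     public:
      ~LogMessage() {
        PR_LOG(GetDemoLog(), PR_LOG_DEBUG, ("%s", mStream.str().c_str()));
      }
      template <typename T>
      LogMessage& operator<<(const T& aValue) {
        mStream << aValue;  // type-checked by the compiler, unlike printf
        return *this;
      }
     private:
      std::ostringstream mStream;
    };

    // Usage: LogMessage() << "connected to " << host << ":" << port;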


 Even if fully using the standard library is untenable from a performance
 perspective, usability may be enhanced if we align some of our APIs which
 mimic STL functionality with the actual STL APIs. For example, we could add
 begin()/end()/push_back()/etc. methods to nsTArray to make it a fairly
 drop-in replacement for std::vector, or at least close enough to one that
 it could be used in other STL APIs (like std::sort, std::find, etc.).
 However, this does create massive incongruities in our API, since the
 standard library prefers naming stuff with this_kind_of_convention whereas
 most Mozilla style guides prefer ThisKindOfConvention.


Perhaps a more annoying issue--though not a showstopper--is that
unique_ptr::release() means something quite different from what
nsXXXPtr::Release() means.
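
To spell out the clash (a sketch; nsXXXPtr stands in for our refcounting
smart pointers): one "release" gives up ownership without destroying, the
other drops a reference and may destroy.

    #include <memory>

    struct Plain { int x; };

    int main() {
      std::unique_ptr<Plain> p(new Plain());
      // unique_ptr::release(): relinquish ownership; the object stays
      // alive and the caller becomes responsible for deleting it.
      Plain* raw = p.release();
      delete raw;
      // An XPCOM-style Release(), by contrast, decrements a refcount and
      // may destroy the object. Same word, opposite effect on lifetime.
      return 0;
    }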


 With all of that stated, the questions I want to pose to the community at
 large are as follows:
 1. How much, and where, should we be using standard C++ library
 functionality in Mozilla code?


We should definitely prefer using the standard C++ library over writing any
new code for MFBT, *unless* there is consensus that the new thing we'd do
in MFBT is substantially clearer. (For example, I think some people
successfully argued that we should have our own atomic types because our
interface is clearly better than std::atomic.)
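
To illustrate that argument (a rough sketch; the mozilla::Atomic lines
follow my reading of mfbt/Atomics.h and are left as comments since they're
an assumption, not a checked API):

    #include <atomic>

    std::atomic<unsigned> gStd(0);
    // mozilla::Atomic<unsigned, mozilla::Relaxed> gMoz(0);

    void Bump() {
      // With std::atomic, the memory ordering is chosen per call site,
      // so two call sites can silently disagree:
      gStd.fetch_add(1, std::memory_order_relaxed);
      // With the MFBT design, the ordering is part of the type, declared
      // once, and every operation on gMoz uses it:
      // gMoz++;
    }

    int main() { Bump(); return 0; }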

Even in the case where MFBT or XPCOM stuff is generally better, we should
*allow* using the standard C++ library anywhere that has additional
constraints that warrant a different tradeoff; e.g. needing to be built
separately from Gecko and/or otherwise needing to minimize Gecko
dependencies.


 3. How should we handle bridge support for standardized features not yet
 universally-implemented?


Generally, I would much rather we implement std::whatever ourselves than
implement mozilla::Whatever, all other things being equal. This saves us
from the massive rewrites later to s/mozilla::Whatever/std::whatever/;
while such rewrites are generally a net win, they are still disruptive
enough to warrant trying to avoid them when possible. In the case where it
is just STLPort being behind, we should just add the thing to STLPort (and
try to upstream it). In the case where the lack of support for a useful
standard library feature is more widespread, we should still implement
std::whatever if the language support we have enables us to do so. I am not
sure where such implementations should live.


 4. When should we prefer our own implementations to standard library
 implementations?


It is a judgement call. The default should be to use standard library
functions, but we shouldn't be shy about using our own stuff if it is
clearly better. On the other side, we shouldn't be shy about replacing uses
of same-thing-but-different Mozilla-specific libraries with uses of the
standard libraries, all things being equal.


 5. To what degree should our platform-bridging libraries
 (xpcom/mfbt/necko/nspr) use or align with the C++ standard library?


I am not sure why you include Necko in that list. Did you mean NSS? For
NSPR and NSS, I would like to include some very basic utilities like
ScopedPRFileDesc that are included directly in NSPR/NSS, so that we can use
them in GTest-based tests, even if NSPR and NSS otherwise stick with C.
But, I don't know if the module owners of those modules will accept them.
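
Roughly the sort of helper meant here (a sketch only; a real ScopedPRFileDesc
in NSPR/NSS could well differ). It stays pre-C++11 in style, since NSPR
itself sticks close to C:

    #include "prio.h"

    // RAII wrapper that closes a PRFileDesc on scope exit.
    class ScopedPRFileDesc {
     public:
      explicit ScopedPRFileDesc(PRFileDesc* aFD) : mFD(aFD) {}
      ~ScopedPRFileDesc() {
        if (mFD) {
          PR_Close(mFD);
        }
      }
      PRFileDesc* get() const { return mFD; }
     private:
      ScopedPRFileDesc(const ScopedPRFileDesc&);  // copying would double-close
      void operator=(const ScopedPRFileDesc&);
      PRFileDesc* mFD;
    };

    // Usage in a GTest-based test:
    //   ScopedPRFileDesc fd(PR_Open("/tmp/t", PR_RDONLY, 0));
    //   ASSERT_TRUE(fd.get() != NULL);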


 6. Where support for an API 

Re: Standard C/C++ and Mozilla

2013-08-02 Thread Ethan Hugg
It is very useful for building a logging interface that is safer and more
convenient than NSPR's printf-style logging. Note that, again, Eric
Rescorla already built an (partial) iostream-based wrapper around NSPR for
WebRTC. I would say that, if there is no additional overhead, then we
should consider making iostream-based logging the default way of doing
things in Gecko because it is so much less error-prone.

I found this comment interesting.  It wasn't that long ago I was instructed
to get rid of all iostream-based logging from media/webrtc/signaling and
media/mtransport if we wanted the logging to appear in opt builds.

https://bugzilla.mozilla.org/show_bug.cgi?id=795126
https://bugzilla.mozilla.org/show_bug.cgi?id=841899

I agree that iostream-based logging would be safer.  If we had it I
wouldn't have had to work on this one:

https://bugzilla.mozilla.org/show_bug.cgi?id=855335

Can we now use iostreams throughout this code?

-EH




On Fri, Aug 2, 2013 at 2:21 PM, Brian Smith br...@briansmith.org wrote:

 [quoted message snipped; see Brian Smith's post above]

Re: Standard C/C++ and Mozilla

2013-08-02 Thread Justin Lebar
 I agree that iostream-based logging would be safer.  If we had it I
 wouldn't have had to work on this one:

 https://bugzilla.mozilla.org/show_bug.cgi?id=855335

I can't access that bug, but maybe you mean
https://bugzilla.mozilla.org/show_bug.cgi?id=onelogger ?

I feel like the goals there are orthogonal to NSPR vs iostream.

I haven't had a chance to work on this lately, but I do intend to land
something when I can.

On Fri, Aug 2, 2013 at 2:41 PM, Ethan Hugg ethanh...@gmail.com wrote:
 [quoted messages snipped]

Re: Standard C/C++ and Mozilla

2013-08-02 Thread Ethan Hugg
Sorry, I should've noticed that 855335 is a sec-bug.  Its title is "Audit
SIPCC printf-style format strings", which means we went through every
logging call and repaired a few that had incorrect printf-style args.

-EH



On Fri, Aug 2, 2013 at 2:44 PM, Justin Lebar justin.le...@gmail.com wrote:

 [quoted messages snipped]

Re: reminder: content processes (e10s) are now used by desktop Firefox

2013-08-02 Thread Asa Dotzler

On 8/2/2013 1:52 PM, Jeff Gilbert wrote:

It's certainly worrying given the number of security- and privacy-related 
addons people rely on working. Seeing ads in thumbnails is relatively harmless 
(if disconcerting), but if someone is relying on an addon for important 
security or privacy reasons, and we auto-updated them and bypassed their 
protections, that's more serious.

-Jeff


I think it's up to add-ons to keep up with Firefox, not the other way 
around. We give them no less than 3 months to adjust to our changes. Is 
that not enough time?


- A



Re: On builds getting slower

2013-08-02 Thread Ehsan Akhgari
First of all, I'd like to thank you and the rest of the build peers for 
your tireless efforts!


On 2013-08-02 5:13 PM, Gregory Szorc wrote:

(Cross posting. Please reply to dev.builds.)


Sorry, but cross-posting to both lists.  I don't think most of the 
people interested in this conversation are on dev.builds (I am, FWIW.)



I've noticed an increase in the number of complaints about the build
system recently. I'm not surprised. Building mozilla-central has gotten
noticeably slower. More on that below. But first, a request.

Many of the complaints I've heard have been from overhearing hallway
conversations, noticing non-directed complaints on IRC, having 3rd
parties report anecdotes, etc. *Please, please, please voice your
complaints directly at me and the build peers.* Indirectly complaining
isn't a very effective way to get attention or to spur action. I
recommend posting to dev.builds so complaints and responses are public
and easily archived. If you want a more personal conversation, just get
in contact with me and I'll be happy to explain things.


This is fair, but really the builds getting slower is so obvious that I 
would be surprised if none of the build config peers have noticed it in 
their daily work.  :-)



Builds are getting slower. http://brasstacks.mozilla.com/gofaster/#/ has
high-level trends for our automation infrastructure. I've also noticed
my personal machines taking ~2x longer than they did 2 years ago.
Unfortunately, I can't give you a precise breakdown over where the
increases have been because we don't do a very good job of recording
these things. This is one reason why we have better monitoring on our Q3
goals list.


My anecdotal evidence also matches the 2x slower metric.


Now, on to the reasons why builds are getting slower.

# We're adding new code at a significant rate.

Here is a breakdown of source file types in the tree by Gecko version.
These are file types that are directly compiled or go through code
generation to create a compiled file.

Gecko 7: 3359 C++, 1952 C, 544 CC, 1258 XPIDL, 110 MM, 195 IPDL
Gecko 14: 3980 C++, 2345 C, 575 CC, 1268 XPIDL, 272 MM, 197 IPDL, 30 WebIDL
Gecko 21: 4606 C++, 2831 C, 1392 CC, 1295 XPIDL, 292 MM, 228 IPDL, 231 WebIDL
Gecko 25: 5211 C++, 3029 C, 1427 CC, 1268 XPIDL, 262 MM, 234 IPDL, 441 WebIDL

That nets totals of:

7: 7418
14: 8667
21: 10875
25: 11872

As you can see, we're steadily adding new source code files to the tree.
mozilla-central today has 60% more source files than Gecko 7! If you
assume number of source files is a rough approximation for compile time,
it's obvious why builds are getting slower: we're building more.

As large new browser features like WebRTC and the ECMAScript
Internationalization API continue to dump hundreds of new source files
in the tree, build times will increase. There's nothing we can do about
this short of freezing browser features. That's not going to happen.


Hmm.  I'm not sure if the number of source files is directly correlated 
to build times, but yeah there's clearly a trend here!



# Header dependency hell

We have hundreds of header files that are included in hundreds or even
thousands of other C++ files. Any time one of these widely-used headers
changes, the object files get invalidated by the build system
dependencies and we have to re-invoke the compiler. This also likely
invalidates ccache, so it's just like a clobber build.

No matter what we do to the build backend to make clobber builds faster,
header dependency hell will continue to undermine this progress for
dependency builds.

I don't believe the build config group is in a position to tackle header
dependency hell at this time. We are receptive to good ideas and will
work with people to land patches. Perhaps an ad-hoc group of Platform
developers can band together to address this?


I have been playing with an idea in my head about this.  What if we had 
a list of the most popular headers in our tree, and we looked through 
them and tried to cut down the number of #includes in the headers?  That 
should help create more isolated sub-graphs and hopefully help with 
breaking the most severe dependency chains.


Writing a tool to spit out this information should be fairly easy.
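
As a rough illustration of how little it takes (a throwaway sketch, not a
proposal for the actual tool): feed it file paths on stdin and it tallies
#include targets, ready for sort -rn:

    #include <fstream>
    #include <iostream>
    #include <map>
    #include <string>

    int main() {
      std::map<std::string, int> counts;
      std::string path;
      while (std::getline(std::cin, path)) {
        std::ifstream in(path.c_str());
        std::string line;
        while (std::getline(in, line)) {
          // Extract the target of a #include <...> or #include "..." line.
          std::string::size_type pos = line.find("#include");
          if (pos == std::string::npos) continue;
          std::string::size_type open = line.find_first_of("<\"", pos);
          if (open == std::string::npos) continue;
          std::string::size_type close = line.find_first_of(">\"", open + 1);
          if (close == std::string::npos) continue;
          ++counts[line.substr(open + 1, close - open - 1)];
        }
      }
      for (std::map<std::string, int>::iterator it = counts.begin();
           it != counts.end(); ++it)
        std::cout << it->second << "\t" << it->first << "\n";
      return 0;
    }

    // e.g. find . -name '*.h' -o -name '*.cpp' | ./include-count | sort -rn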


# Increased reliance on C++ language features

I *suspect* that our increased reliance on C++ language features such as
templates and new C++11 features is contributing to slower build times.
It's been long known that templates and other advanced language features
can blow up the compiler if used in certain ways. I also suspect that
modern C++11 features haven't been optimized to the extent years-old C++
features have been. Combine this with the fact compilers are working
harder than ever to optimize code and it wouldn't surprise me if a CPU
cycle invested in the compiler isn't giving the returns it used to.

I would absolutely love for a compiler wizard to sit down and profile
Gecko C++ in Clang, GCC, and MSVC. If there are things we can do to 

Re: Standard C/C++ and Mozilla

2013-08-02 Thread Ehsan Akhgari

On 2013-08-02 5:21 PM, Brian Smith wrote:

3. How should we handle bridge support for standardized features not yet
universally-implemented?



Generally, I would much rather we implement std::whatever ourselves than
implement mozilla::Whatever, all other things being equal.


Yes, but it's still not clear to me why you prefer this.


This saves us
from the massive rewrites later to s/mozilla::Whatever/std::whatever/;
while such rewrites are generally a net win, they are still disruptive
enough to warrant trying to avoid them when possible.


Disruptive in what sense?  I recently did two of these kinds of 
conversions and nobody complained.



In the case where it
is just STLPort being behind, we should just add the thing to STLPort (and
try to upstream it). in the case where the lack of support for a useful
standard library feature is more widespread, we should still implement
std::whatever if the language support we have enables us to do so. I am not
sure where such implementations should live.


Yes, upstreaming fixes is clearly ideal, but sometimes pragmatism wins. 
 For example, I personally wouldn't have the first clue what I need to 
do in order to modify STLport (how to make b2g/Android builds use my 
modified library, how to upstream the fix, what to do when we pick up 
the changes, how long that would take, what to do if my changes are not 
accepted upstream, etc.)



4. When should we prefer our own implementations to standard library
implementations?



It is a judgement call. The default should be to use standard library
functions, but we shouldn't be shy about using our own stuff if it is
clearly better. On the other side, we shouldn't be shy about replacing uses
of same-thing-but-different Mozilla-specific libraries with uses of the
standard libraries, all things being equal.


If you agree that it's a judgement call, then prescribing what the 
default should be is, well, also a judgement call!



6. Where support for an API we wish to use is not universal, what is the
preferred way to mock that support?
[Note: similar questions also apply to NSPR and NSS with respect to newer
C99 and C11 functionality.]



There is no tolerance for mass changes like s/PRInt32/int32_t/ in NSPR or
NSS, AFAICT.


We mostly treat those libraries as read-only anyway, for better or 
worse.



C99 and C11 are basically off the table too, because Microsoft
refuses to support them in MSVC.


Yes, focusing on improving C code like this is a lost cause.

Ehsan



Re: std::unique_ptr, std::move,

2013-08-02 Thread Ehsan Akhgari

On 2013-08-02 4:49 PM, Brian Smith wrote:

That sounds reasonable to me. So, based on that then, let's get back to my
original question that motivated the discussion of the policy: If we add
std::move, std::forward, and std::unique_ptr to STLPort for Android and
B2G, can we start using std::move, std::forward, and std::unique_ptr
throughout Gecko?


Yes, if they're available in all of our environments, I don't see why 
not.  What we want to be careful with is how the STLport changes would 
work (we don't want to make builds fail if you just grab an Android NDK).


Ehsan



Re: std::unique_ptr, std::move,

2013-08-02 Thread Ehsan Akhgari

On 2013-08-02 4:39 PM, Brian Smith wrote:

On Fri, Aug 2, 2013 at 2:58 PM, Mike Hommey m...@glandium.org wrote:

 [quoted question snipped]

Not at all, as long as ESR24 keeps building with gcc 4.4. I've even been
complaining about b2g still using gcc 4.4 on trunk...


This adds too much risk of security patches failing to backport from
mozilla-central to ESR 24. Remember that one of the design goals of ESR
is to minimize the amount of effort we put into it so that ESR doesn't
slow down real Firefox. AFAICT, most people don't even want ESR at all.
So, a constraint to keep ESR 24 compatible with GCC needs to include
some resources for doing the backports.


How does this add too much risk?  Patches that we backport to ESR are 
usually fairly small, and there is already some risk involved as the 
codebases diverge, of course.




Re: On builds getting slower

2013-08-02 Thread Kyle Huey
On Fri, Aug 2, 2013 at 3:38 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 # Building faster

 One of our Q3 goals is to replace the export tier with something more
 efficient. More on tiers at [1]. This should make builds faster,
 especially on pymake. Just earlier this week we made WebIDL and XPIDL
 code generation concurrent. Before, they executed serially, failing to
 utilize multiple CPU cores. Next steps are XPIDL code gen, installing
 headers, and preprocessing. This is all tracked in bug 892644.


 Out of curiosity, why was the export tier the first target for this?  I may
 lack context here, but the slowest tier that we have is the platform libs
 tier.  Wouldn't focusing on that have given us the biggest possible bang
 for the buck?


Tier is the wrong term here[0].  I think it would be more correct to say
that we're removing the export phase.  Our build system currently visits
every[1] directory 3 times, once to build the 'export' target, once to
build the 'libs' target, and once to build the 'tools' target.  Tiers are
groupings of directories.  The build system guarantees that every directory
in a given tier has export, libs, and tools targets processed before doing
anything in the following tier.  The goal is to remove the export phase
across all tiers and replace it with a dedicated 'precompile' tier for the
things that need to be done before compiling C++/etc in the libs phase
(such as WebIDL/IPDL code generation, XPIDL header generation, putting
headers in dist/include, etc).

- Kyle

[0] at least in the sense that our build system has used it in the past.
[1] this isn't strictly true (e.g. TOOL_DIRS) but is close enough for the
purposes of this conversation.


Re: std::unique_ptr, std::move,

2013-08-02 Thread Neil

Brian Smith wrote:


We have mozilla-build for Windows. From what you say, it sounds like we should 
have mozilla-build for Linux too that would include a pre-built GCC or Clang or 
whatever we choose as *the* toolchain for desktop Linux.

mozilla-build doesn't include a compiler or SDK. At one point we 
supported three (six if you include x64) compilers and three SDKs 
(except I don't think one of the compilers supported one of the SDKs, 
but that wasn't mozilla-build's fault).


--
Warning: May contain traces of nuts.


Re: Standard C/C++ and Mozilla

2013-08-02 Thread Ehsan Akhgari
It was brought to my attention that it's unclear what I'm looking for in
this conversation, so let me try to summarize.  I am not convinced that
there is something wrong with the way that things currently work (polyfill
where the feature is not available everywhere, else use it if
appropriate).  I'm trying to understand what the shortcomings of the
current behavior are.  I think it's best if we knew which problems we're
trying to solve first.

Cheers,

--
Ehsan
http://ehsanakhgari.org/


On Fri, Aug 2, 2013 at 6:47 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 [quoted message snipped; identical to the post above]




On indirect feedback

2013-08-02 Thread Robert O'Callahan
On Sat, Aug 3, 2013 at 9:13 AM, Gregory Szorc g...@mozilla.com wrote:

 Many of the complaints I've heard have been from overhearing hallway
 conversations, noticing non-directed complaints on IRC, having 3rd parties
 report anecdotes, etc. *Please, please, please voice your complaints
 directly at me and the build peers.* Indirectly complaining isn't a very
 effective way to get attention or to spur action.


Yes! Indirect feedback is antisocial and destructive.
http://robert.ocallahan.org/2013/05/over-time-ive-become-increasingly.html FWIW.

Even if you're just the recipient of indirect feedback, you can help by
refusing to hear it until direct feedback has been given.

Rob
-- 
Jtehsauts  tshaei dS,o n Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
le atrhtohu gthot sf oirng iyvoeu rs ihnesa.rt sS?o  Whhei csha iids  teoa
stiheer :p atroa lsyazye,d  'mYaonu,r  sGients  uapr,e  tfaokreg iyvoeunr,
'm aotr  atnod  sgaoy ,h o'mGee.t  uTph eann dt hwea lmka'n?  gBoutt  uIp
waanndt  wyeonut  thoo mken.o w  *
*


Re: On builds getting slower

2013-08-02 Thread Gregory Szorc

On 8/2/13 3:38 PM, Ehsan Akhgari wrote:

Hmm.  I'm not sure if the number of source files is directly correlated
to build times, but yeah there's clearly a trend here!


I concede a lines-of-code count would be a better indicator. I'm lazy.


# Header dependency hell

I have been playing with an idea in my head about this.  What if we had
a list of the most popular headers in our tree, and we looked through
them and tried to cut down the number of #includes in the headers?  That
should help create more isolated sub-graphs and hopefully help with
breaking the most severe dependency chains.

Writing a tool to spit out this information should be fairly easy.


I'll try to get a tool in the tree for people to run. 
https://bugzilla.mozilla.org/show_bug.cgi?id=901132
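
For a sense of what such a tool involves, here is a from-scratch sketch 
(an illustration, not the tool from bug 901132) that walks a tree and 
tallies how often each header is #included; it needs C++17 for 
<filesystem>:

  // count-includes.cpp -- illustrative sketch only, not the bug 901132 tool.
  // Walks a source tree and tallies how often each header is #included.
  #include <filesystem>
  #include <fstream>
  #include <iostream>
  #include <map>
  #include <regex>
  #include <string>

  int main(int argc, char** argv) {
    const std::regex inc(R"re(^\s*#\s*include\s*[<"]([^<">]+)[">])re");
    std::map<std::string, int> counts;
    for (const auto& entry : std::filesystem::recursive_directory_iterator(
             argc > 1 ? argv[1] : ".")) {
      const auto ext = entry.path().extension();
      if (ext != ".h" && ext != ".cpp")
        continue;  // only headers and sources
      std::ifstream in(entry.path());
      std::string line;
      std::smatch m;
      while (std::getline(in, line))
        if (std::regex_search(line, m, inc))
          ++counts[m[1]];
    }
    // Pipe the output through `sort -rn` to see the most popular headers.
    for (const auto& [header, n] : counts)
      std::cout << n << '\t' << header << '\n';
  }

The real tool would also want to distinguish #includes appearing in 
headers (which fan out) from those in .cpp files (which don't).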



# Increased reliance on C++ language features

But I'm not convinced at all about the C++11 features contributing to
this.  I cannot think of any reason at all why that should be the case
for the things that we've started to use.  Do you have any evidence to
implicate some of those features?


No. Just my general distrust of new/young vs mature software.


# Clobbers are more frequent and more annoying

This should be relatively easy to address (compared to the other
things that we can do, of course).  I assert that every time we touch
the CLOBBER file, it's because the build system could not figure out the
dependencies properly.  Fortunately we can easily log the CLOBBER file
and go back in time and find all of the patches that included CLOBBER
modifications and debug the build dependency issues.  Has there been any
effort to address these issues by looking at the testcases that we have
in form of patches?


To some degree, yes. https://bugzilla.mozilla.org/show_bug.cgi?id=890744 
is a good example. Vacation schedules didn't align for quick action. 
There may also be a pymake bug or two involved.


Also, you could say people have been touching CLOBBER prematurely. I 
know of a few cases where CLOBBER was touched in the hope that it would 
fix a problem; it didn't, and the commit history was left with a 
changeset that changed CLOBBER.



# Slowness Summary

Every time that we don't utilize 100% of our cores during the build
process, that's an unnecessary slowdown.  That consistently wastes a lot
of time during every build, and it also means that we can't address this
by getting more powerful machines.  :(


Right. If you plot CPU usage vs time, we can make the build faster by 
filling out the box and using 100% of all cores or by decreasing the 
total number of required CPU cycles to build. We have chosen to focus 
mostly on the former because optimizing build actions can be a lot of 
work. We've gotten lucky in some cases (e.g. WebIDLs in bug 861587). I fear 
compiling C++ will be much harder. I'm hoping PCH and fixing dependency 
hell are medium-hanging fruits.
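
An idealization of my own, ignoring I/O and scheduler effects: the two 
levers fall out of

  T_{\mathrm{wall}} \approx \frac{W}{p \cdot u}

where W is the total CPU work the build demands, p is the core count, and 
u is the average utilization. Pushing u toward 1 fills out the box, while 
shrinking W (PCH, trimming includes) shrinks the box itself.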


I also have measurements that show we peak out at certain concurrency 
levels. The trend in CPUs is towards more cores, not higher clock speed. 
So focusing on effective core usage will continue to be important. 
Derecursifying the build will allow us to use more cores because make 
won't be starved during directory traversal. Remember, concurrent make 
only works within the same directory or for directories under 
PARALLEL_DIRS. Different top-level directories during tier traversal 
(e.g. dom and xpcom) are executed sequentially.



# Building faster

One of our Q3 goals is to replace the export tier with something more
efficient. More on tiers at [1]. This should make builds faster,
especially on pymake. Just earlier this week we made WebIDL and XPIDL
code generation concurrent. Before, they executed serially, failing to
utilize multiple CPU cores. Next steps are XPIDL code gen, installing
headers, and preprocessing. This is all tracked in bug 892644.


Out of curiosity, why was the export tier the first target for this?  I
may lack context here, but the slowest tier that we have is the platform
libs tier.  Wouldn't focusing on that have given us the biggest possible
bang for the buck?


Making platform libs faster will without a doubt have the biggest 
impact. We chose to start with export first for a few reasons.


First, it's simple. We had to start somewhere. platform/libs is an order 
of magnitude more complex. We are making major refactorings in export 
already and we felt it best to prove out concepts with export rather 
than going for the hardest problem first.


Second, export is mostly standalone targets. We would like to port the 
build backend bottom up instead of top down so we can make the 
dependencies right from the beginning. If we started with platform/lib, 
we'd have to hack something together now and revamp it with proper 
dependencies later.


Third, export is horribly inefficient. pymake spends an absurd amount of 
time traversing directories, parsing make files and doing very little 
for each directory in the export tier. Platform, by contrast, tends to 
have longer-running jobs.

Re: On builds getting slower

2013-08-02 Thread Gregory Szorc

On 8/2/13 4:43 PM, Robert O'Callahan wrote:

Nathan has just made an excellent post on this topic:
https://blog.mozilla.org/nfroyd/2013/08/02/i-got-99-problems-and-compilation-time-is-one-of-them/

It would be interesting to measure the number of non-blank preprocessed
lines in each build, over time. This is probably going up faster than
the number of overall source lines, possibly explaining why build times
increase faster than just the increasing size of the code.

Greg, I assume the build team has data on where time is spent in various
phases of the build today. Can you point us to that data? Especially
valuable if you have data over several releases.


1) Pull my patch queue from 
https://hg.mozilla.org/users/gszorc_mozilla.com/gecko-patches/

2) Apply the build-resource-monitor and build-resources-display patches
3) $ mach build
4) $ mach build-resource-usage

The raw data is saved to objdir/.mozbuild/build_resources.json. It 
contains CPU, memory, and I/O measurements for every second during the 
build along with timing information for the different tiers, subtiers, 
and directories.


Currently, the HTML display is kinda crap. It only displays CPU and it 
looks horrible. I'm not a professional web developer! The main goal with 
the initial patch is to have data collection so we can do nice things 
with it later.


Also, I haven't tested on Windows in a while. You also need psutil to be 
able to capture the data. psutil is currently optional in our build 
system. To test if you have it, run |mach python| and try to |import 
psutil|. And, it likely won't work with a fresh source checkout because 
psutil is built in configure and mach invokes configure, so there's a 
chicken and egg problem. That's pretty much why this hasn't landed yet. 
Yeah, I need to land this. It's on my Q3 goals list. Bug 883209 tracks.


Unfortunately we don't have this granular data for the past and likely 
never will unless someone wants to rebase and take a bunch of 
measurements. We do have old buildbot logs, but those aren't too useful 
without timestamps on each line (this is one reason mach prefixes times 
on each line - and yes, there needs to be an option to disable that).

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-08-02 Thread Brian Smith
On Sat, Aug 3, 2013 at 12:51 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 This adds too much risk of security patches failing to backport from
 mozilla-central to ESR 24. Remember that one of the design goals of ESR
 is to minimize the amount of effort we put into it so that ESR doesn't
 slow down real Firefox. AFAICT, most people don't even want ESR at all.
 So, a constraint to keep ESR 24 compatible with GCC needs to include
 some resources for doing the backports.


 How does this add too much risk?  Patches that we backport to ESR are
 usually fairly small, and there is already some risk involved as the
 codebases diverge, of course.


There are two kinds of risks: The risk that any developer would need to
waste time on ESR just to support a product that isn't even Firefox on a
platform that virtually nobody uses, and the risk that comes with making
any changes to the security fix that you are trying to backport. The ideal
case (assuming we can't just kill ESR) is that your backport consists of
hg graft and hg push and you're done. That is what we should optimize
for, as far as supporting ESR is concerned. You are right, of course, that
ESR and mozilla-central diverge as mozilla-central is improved and there
are likely to be merge conflicts. But, we should not contribute to that
divergence unnecessarily.

How many developers are even insisting on building Firefox on a Linux
distro that insists on using GCC 4.4, who are unwilling to upgrade their
compiler? We're talking about a very, very small minority of people,
AFAICT. I know one of those people is Mike, who is a very, very important
Mozillian who I definitely do not intend any insult. But, it really does
seem to me that instead of us trying to bending to the desires of the most
conservative distros, the rational decision is to ask those distros who
insist on using very old tools for very long periods of time to solve the
problem that they've caused themselves with their choices. I think we could
could still feel really good about how Linux-friendly we are even if we
shifted more of these kinds of burdens onto the distros.

Again, no offense intended for Mike or any other maintainer of any Linux
distro. I have nothing against Debian or any group.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-08-02 Thread Boris Zbarsky

On 8/2/13 8:14 PM, Brian Smith wrote:

The risk that any developer would need to
waste time on ESR just to support a product that isn't even Firefox on a
platform that virtually nobody uses, and the risk that comes with making
any changes to the security fix that you are trying to backport.


I feel that there's an important piece of data missing here: how many 
patches get backported to ESR in practice?


-Boris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: On indirect feedback

2013-08-02 Thread Brian Smith
On Sat, Aug 3, 2013 at 1:32 AM, Robert O'Callahan rob...@ocallahan.org wrote:

 On Sat, Aug 3, 2013 at 9:13 AM, Gregory Szorc g...@mozilla.com wrote:

  Many of the complaints I've heard have been from overhearing hallway
  conversations, noticing non-directed complaints on IRC, having 3rd
 parties
  report anecdotes, etc. *Please, please, please voice your complaints
  directly at me and the build peers.* Indirectly complaining isn't a very
  effective way to get attention or to spur action.
 

 Yes! Indirect feedback is antisocial and destructive.

 http://robert.ocallahan.org/2013/05/over-time-ive-become-increasingly.html FWIW.

 Even if you're just the recipient of indirect feedback, you can help, by
 refusing to hear it until direct feedback has been given.


Rob,

I think some people may interpret what you say in that last paragraph the
opposite of how you intend. I am pretty sure you mean something like "If
somebody starts to complain to you about somebody else, then stop them and
ask them to first talk to the person they were trying to complain about."

I recommend that, when you hear that people are giving indirect feedback
about you or your work to others, that you seek them out in person (or
video calling, if there's too much distance). I've also found that people
often assume that I'm going to be difficult to talk with because of the
direct way I write; seeking people out for face-to-face discussions seems
to have had the side-effect of making it easier for people to read my email
with the correct tone. For the same reason, I highly recommend showing up
at that person's desk over emailing them, if at all possible.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-08-02 Thread Brian Smith
On Sat, Aug 3, 2013 at 12:50 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 On 2013-08-02 4:49 PM, Brian Smith wrote:

 That sounds reasonable to me. So, based on that then, let's get back to my
 original question that motivated the discussion of the policy: If we add
 std::move, std::forward, and std::unique_ptr to STLPort for Android and
 B2G, can we start using std::move, std::forward, and std::unique_ptr
 throughout Gecko?


 Yes, if they're available in all of our environments, I don't see why not.
  What we want to be careful with is how the STLport changes would work (we
 don't want to make builds fail if you just grab an Android NDK).


I am not quite sure what you mean. Here is the workflow that I was
envisioning for solving this problem:

1. Add std::move, std::forward, and std::unique_ptr to STLPort (backporting
them from STLPort's git master, with as few changes as possible).
2. Write a patch that changes something in Gecko to use std::move,
std::forward, and std::unique_ptr.
3. Push that patch to try (try: -b o -p all -u all -t none).
4. If all the builds build, and all the tests pass, then ask for review.
5. After r+, land on mozilla-inbound. If all the builds build, and all the
tests pass, then anybody/everybody is free to use std::move, std::forward,
and std::unique_ptr.

To me, this is the most (only?) reasonable way to decide when enough
configurations support a language feature/library we are considering using.
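
For concreteness, here is the flavor of Gecko-side change that step 2
might carry, exercising all three features at once. This is a minimal
sketch with made-up names, assuming the step-1 STLPort backport is in
place:

  #include <memory>   // std::unique_ptr
  #include <utility>  // std::move, std::forward

  struct Decoder {
    explicit Decoder(int channels) : channels_(channels) {}
    int channels_;
  };

  // std::forward preserves the value category of constructor arguments.
  template <typename... Args>
  std::unique_ptr<Decoder> MakeDecoder(Args&&... args) {
    return std::unique_ptr<Decoder>(new Decoder(std::forward<Args>(args)...));
  }

  void Consume(std::unique_ptr<Decoder> d) {
    // d is the sole owner here; the Decoder dies when d goes out of scope.
  }

  int main() {
    std::unique_ptr<Decoder> d = MakeDecoder(2);
    Consume(std::move(d));  // explicit ownership transfer; d is now null
    return d ? 1 : 0;
  }

If the tree-wide try push in step 3 comes back green with code like this
in it, that settles the support question more convincingly than any
compatibility table would.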

Cheers,
Brian



 Ehsan




-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-08-02 Thread Mike Hommey
On Fri, Aug 02, 2013 at 06:47:08PM -0400, Ehsan Akhgari wrote:
 On 2013-08-02 5:21 PM, Brian Smith wrote:
 3. How should we handle bridge support for standardized features not yet
 universally-implemented?
 
 
 Generally, I would much rather we implement std::whatever ourselves than
 implement mozilla::Whatever, all other things being equal.
 
 Yes, but it's still not clear to me why you prefer this.
 
 This saves us
 from the massive rewrites later to s/mozilla::Whatever/std::whatever/;
 while such rewrites are generally a net win, they are still disruptive
 enough to warrant trying to avoid them when possible.
 
 Disruptive in what sense?  I recently did two of these kinds of
 conversions and nobody complained.
 
 In the case where it
 is just STLPort being behind, we should just add the thing to STLPort (and
 try to upstream it). In the case where the lack of support for a useful
 standard library feature is more widespread, we should still implement
 std::whatever if the language support we have enables us to do so. I am not
 sure where such implementations should live.
 
 Yes, upstreaming fixes is clearly ideal, but sometimes pragmatism
 wins.  For example, I personally wouldn't have the first clue what I
 need to do in order to modify STLport (how to make b2g/Android
 builds use my modified library, how to upstream the fix, what to do
 when we pick up the changes, how long that would take, what to do if
 my changes are not accepted upstream, etc.)

Android and b2g are now using an in-tree STLport copy. We can patch it
if necessary, and it will be used everywhere.

Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-08-02 Thread Mike Hommey
On Fri, Aug 02, 2013 at 02:44:57PM -0700, Justin Lebar wrote:
  I agree that iostream-based logging would be safer.  If we had it I
  wouldn't have had to work on this one:
 
  https://bugzilla.mozilla.org/show_bug.cgi?id=855335
 
 I can't access that bug, but maybe you mean
 https://bugzilla.mozilla.org/show_bug.cgi?id=onelogger ?
 
 I feel like the goals there are orthogonal to NSPR vs iostream.
 
 I haven't had a chance to work on this lately, but I do intend to land
 something when I can.

That sounds similar to what I wanted to do in bug 602467, but I never
got anywhere with it, beyond ideas in some parts of my brain.
And yes, being free of static initializers is a must-have feature imho.
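
For what it's worth, the core trick, sketched from scratch here rather
than taken from bug 602467 or bug 855335, is to keep the global logger a
constant-initialized aggregate so that no code runs before main:

  #include <cstdio>

  // A plain aggregate: constant-initialized at load time, so it
  // contributes no static initializer and is safe to touch from
  // other translation units' initializers.
  struct Logger {
    bool enabled;
    void log(const char* msg) {
      if (enabled)
        std::fprintf(stderr, "%s\n", msg);
    }
  };

  Logger gLogger = { true };  // constant initialization, no ctor emitted

  int main() {
    gLogger.log("no static initializers involved");
  }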

Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-08-02 Thread Mike Hommey
On Sat, Aug 03, 2013 at 02:14:10AM +0200, Brian Smith wrote:
 On Sat, Aug 3, 2013 at 12:51 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
 
  This adds too much risk of security patches failing to backport from
  mozilla-central to ESR 24. Remember that one of the design goals of ESR
  is to minimize the amount of effort we put into it so that ESR doesn't
  slow down real Firefox. AFAICT, most people don't even want ESR at all.
  So, a constraint to keep ESR 24 compatible with GCC needs to include
  some resources for doing the backports.
 
 
  How does this add too much risk?  Patches that we backport to ESR are
  usually fairly small, and there is already some risk involved as the
  codebases diverge, of course.
 
 
 There are two kinds of risks: The risk that any developer would need to
 waste time on ESR just to support a product that isn't even Firefox on a
 platform that virtually nobody uses, and the risk that comes with making
 any changes to the security fix that you are trying to backport. The ideal
 case (assuming we can't just kill ESR) is that your backport consists of
 hg graft and hg push and you're done. That is what we should optimize
 for, as far as supporting ESR is concerned. You are right, of course, that
 ESR and mozilla-central diverge as mozilla-central is improved and there
 are likely to be merge conflicts. But, we should not contribute to that
 divergence unnecessarily.

All the refactoring we're doing is already making these divergences
more significant than supporting an older toolchain does. You can't just
hg graft and hg push over changes such as the s/PRBool/bool/ type
changes (and we're going to do a lot of them over the lifetime of
ESR24). I doubt we'd want to backport these refactorings to ESR.

So while I feel for your concern, it just doesn't have much to do with
the toolchain problem.

Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-08-02 Thread Brian Smith
On Sat, Aug 3, 2013 at 12:47 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 On 2013-08-02 5:21 PM, Brian Smith wrote:

 3. How should we handle bridge support for standardized features not yet
 universally-implemented?


 Generally, I would much rather we implement std::whatever ourselves than
 implement mozilla::Whatever, all other things being equal.


 Yes, but it's still not clear to me why you prefer this.


1. It avoids a phase of mass rewrites s/mozilla::Whatever/std::whatever/.
(See below).
2. It is reasonable to expect that std::whatever works as the C++ standard
says it should. It isn't reasonable to expect mozilla::Whatever to work
exactly like std::whatever. And, often, mozilla::Whatever isn't actually
the same as std::whatever.



  This saves us
 from the massive rewrites later to s/mozilla::Whatever/std::whatever/;
 while such rewrites are generally a net win, they are still disruptive
 enough to warrant trying to avoid them when possible.


 Disruptive in what sense?  I recently did two of these kinds of
 conversions and nobody complained.


You have to rebase all your patches in your patch queue and/or run scripts
on your patches (that, IIRC, don't run on windows because mozilla-build
doesn't have sed -i). I'm not complaining about the conversions you've
done, because they are net wins. But, it's still less disruptive to avoid
unnecessary rounds of rewrites when possible, and
s/mozilla::Whatever/std::whatever/ seems unnecessary to me when we could
have just named mozilla::Whatever std::whatever to start with.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-08-02 Thread Joshua Cranmer 

On 8/2/2013 10:09 PM, Brian Smith wrote:
2. It is reasonable to expect that std::whatever works as the C++ 
standard says it should. It isn't reasonable to expect 
mozilla::Whatever to work exactly like std::whatever. And, often, 
mozilla::Whatever isn't actually the same as std::whatever.


Judging by the recent record of std::atomic and the investigations into 
std::is_pod, this is not necessarily the case. STLport's std::is_pod 
does not return the correct answer for classes (i.e., the use case you 
probably most care about). Our ability to use std::atomic from libstdc++ 
has been steadily reduced due to it not supporting the feature set we 
want; just recently, we now require libstdc++ 4.7 to use it so we can 
get it to work with enums. Granted, <atomic> and <type_traits> are 
probably the most compiler-dependent headers of the lot, but it goes to 
show that just because it is implemented by a standard library doesn't 
mean it is correctly implemented.
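
For reference, the enum case reads roughly like this (a minimal example,
not code from the tree):

  #include <atomic>

  enum State { IDLE, RUNNING, DONE };

  std::atomic<State> gState(IDLE);  // atomic over the enum itself, no int casts

  int main() {
    gState.store(RUNNING, std::memory_order_release);
    State expected = RUNNING;
    // Compare-and-swap directly on the enum type.
    bool ok = gState.compare_exchange_strong(expected, DONE);
    return (ok && gState.load(std::memory_order_acquire) == DONE) ? 0 : 1;
  }

Whether a given standard library accepts std::atomic over an enum at all
is exactly the sort of thing that varies by version.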


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: On builds getting slower

2013-08-02 Thread Nicholas Nethercote
 Building mozilla-central has gotten noticeably slower.

Yep.  A bit over two years ago I started doing frequent browser builds
for the first time;  previously I'd mostly just worked with the JS
shell.  I was horrified by the ~25 minutes it took for a clobber
build.  I got a new Linux64 box and build times dropped to ~12
minutes, which made a *large* difference to my productivity.

On that same machine, I'm now back to ~25 minutes again.  I've assumed
it's due to more code, specifically:

1. more code in the repository;
2. more code generated explicitly (e.g. dom bindings);
3. more code generated implicitly (i.e. templates).

I don't know the relative impacts, though 1 is clearly a big part of it.

Even worse, link times are through the roof.  I was thrilled when, two
years ago, I switched from ld to gold and linking time plummeted.  The
first link after rebooting was always slow, but I could link libxul
back then in about 9 seconds.  I haven't measured recently but I'm
certain it's now *much* higher.  Even the JS shell, which used to take
hardly any time to link, now takes 10s or more;  enough that I often
switch to doing something else while waiting.

If I could speed up any part of the builds, it would be linking.
Waiting a long time to test a one file change sucks.


 # Header dependency hell

I've recently done a bunch of work on improving the header situation
in SpiderMonkey.  I can break it down to two main areas.

== MINIMIZING #include STATEMENTS ==

There's a clang tool called include-what-you-use, a.k.a. IWYU
(http://code.google.com/p/include-what-you-use/).  It tells you
exactly which headers should be included in all your files.  I've used
it to minimize #includes somewhat already
(https://bugzilla.mozilla.org/show_bug.cgi?id=634839) and I plan to do
some more Real Soon Now
(https://bugzilla.mozilla.org/show_bug.cgi?id=888768).  There are
still a couple of hundred unnecessary #include statements in
SpiderMonkey.  (BTW, SpiderMonkey has ~280 .cpp files and ~370 .h
files.)

IWYU is great, because it's really hard to figure this stuff out
manually.  It's also not perfect;  about 5% of its suggestions are
simply wrong, i.e. it says you can remove a #include that you can't.
Also, there are often project-specific idioms that it doesn't know
about -- there were several, but the one I remember off the top of my
head is that it was constantly suggesting I remove
mozilla/StandardInteger.h and add stdint.h (thankfully that's not
an issue any more :)  There are pragmas that you can annotate your
source with, but I found that they don't always work as advertised and
aren't really worth the effort.  Although IWYU basically works, it feels
a bit like software that doesn't get much maintenance.
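
For anyone who hasn't seen them, the pragmas are ordinary comments that
IWYU reads and the compiler ignores; a trivial standalone example follows,
with the caveat above that they don't always behave:

  #include <cstdio>
  #include <string>  // IWYU pragma: keep
  // The pragma above tells the tool that this #include must stay, even
  // if its analysis concludes otherwise.

  int main() {
    std::string s("keep me");
    std::printf("%s\n", s.c_str());
  }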

I haven't been doing rigorous measurements, but I think that these
IWYU-related improvements don't do much for clobber builds, but can
help significantly with partial rebuilds.  It also just feels good to
make these improvements.

IWYU tells you the #includes that are unnecessary;  it also tells you
which ones are missing, i.e. which ones are being #included indirectly
through another header.  I've only bothered removing #includes because
adding the missing ones doesn't feel worthwhile.  Sometimes this means
that when you remove an unnecessary |#include a.h|, you have to add
a |#include b.h| because b.h was being pulled in only via a.h.  Not
a big deal.

Relatedly, jorendorff wrote a python script that identifies cycles in
header dependencies and diagnosed a cycle in SpiderMonkey that
involved *11* header files.  He and I broke that cycle in a series of
29 patches in
https://bugzilla.mozilla.org/show_bug.cgi?id=872416,
https://bugzilla.mozilla.org/show_bug.cgi?id=879831, and
https://bugzilla.mozilla.org/show_bug.cgi?id=886205.  Prior to the
last 9 patches, if you touched vm/Stack-inl.h and rebuilt, you'd
rebuild 125 .cpp files.  After these patches landed, it dropped to 30.
 The cycle-detection script has been incorporated into the |make
check-style| target that is about to land in
https://bugzilla.mozilla.org/show_bug.cgi?id=880088.
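
The usual shape of such a cycle-breaking fix, schematically: swap an
#include for a forward declaration wherever only pointers or references
are formed. A standalone toy (names borrowed from the thread, bodies
invented):

  #include <cstdio>

  class JSScript;            // forward declaration: the full definition
                             // is not needed to declare a pointer member

  class StackFrame {
    JSScript* script_;
   public:
    explicit StackFrame(JSScript* s) : script_(s) {}
    JSScript* script() const { return script_; }
  };

  // Only code that dereferences a JSScript needs its real definition,
  // which in a real tree would live in its own header.
  class JSScript {
   public:
    const char* filename() const { return "a.js"; }
  };

  int main() {
    JSScript script;
    StackFrame frame(&script);
    std::printf("%s\n", frame.script()->filename());
  }

Each swap like this peels an edge out of the include graph, which is how
a 125-file rebuild can shrink to a 30-file one.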

I've also done various bits of refactoring with an eye towards
simplifying the header dependencies.
https://bugzilla.mozilla.org/show_bug.cgi?id=880041 is one example.
These kinds of things can interact well with IWYU -- you do a
clean-up, then run IWYU to find all the #includes that are no longer
necessary.

Gregory suggested that headers aren't something that the build config
group can tackle, and I agree.  Modifying #include statements en masse
is much easier if you have some familiarity with the code.  You need a
sense of which headers should include which others, and often you have
to move code around.  So I encourage people to use IWYU on parts of
the code they are familiar with.  (Aryeh did this with editor/ in
https://bugzilla.mozilla.org/show_bug.cgi?id=772807.)

I should also note that this work is pretty tedious.  There's lots of
waiting for compilation, lots of try server runs to 

Re: On builds getting slower

2013-08-02 Thread L. David Baron
On Saturday 2013-08-03 13:36 +1000, Nicholas Nethercote wrote:
  # Header dependency hell
 
 I've recently done a bunch of work on improving the header situation
 in SpiderMonkey.  I can break it down to two main areas.
 
 == MINIMIZING #include STATEMENTS ==
 
 There's a clang tool called include-what-you-use, a.k.a. IWYU
 (http://code.google.com/p/include-what-you-use/).  It tells you
 exactly which headers should be included in all your files.  I've used
 it to minimize #includes somewhat already
 (https://bugzilla.mozilla.org/show_bug.cgi?id=634839) and I plan to do
 some more Real Soon Now
 (https://bugzilla.mozilla.org/show_bug.cgi?id=888768).  There are
 still a couple of hundred unnecessary #include statements in
 SpiderMonkey.  (BTW, SpiderMonkey has ~280 .cpp files and ~370 .h
 files.)

This tool sounds great.  I suspect there's even more to be gained
that it can't detect, though: things that are used, but could
easily be made unused.

I did a few passes of poking through .deps/*.pp files, and looking
for things I thought didn't belong.  It's been a while, though.
(See bug 64023.)

khuey was also recently working on something to reduce some pretty
bad #include fanout related to the new DOM bindings generation.
(I'm not sure if it's landed.)

-David

-- 
𝄞   L. David Baron http://dbaron.org/   𝄂
𝄢   Mozilla  https://www.mozilla.org/   𝄂
 Before I built a wall I'd ask to know
 What I was walling in or walling out,
 And to whom I was like to give offense.
   - Robert Frost, Mending Wall (1914)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: On builds getting slower

2013-08-02 Thread Kyle Huey
On Fri, Aug 2, 2013 at 8:59 PM, L. David Baron dba...@dbaron.org wrote:

 khuey was also recently working on something to reduce some pretty
 bad #include fanout related to the new DOM bindings generation.
 (I'm not sure if it's landed.)


That was bug 887553.  I'll land it on Monday.

- Kyle
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: On builds getting slower

2013-08-02 Thread Kyle Huey
On Fri, Aug 2, 2013 at 9:12 PM, Kyle Huey m...@kylehuey.com wrote:

 On Fri, Aug 2, 2013 at 8:59 PM, L. David Baron dba...@dbaron.org wrote:

 khuey was also recently working on something to reduce some pretty
 bad #include fanout related to the new DOM bindings generation.
 (I'm not sure if it's landed.)


 That was bug 887553.  I'll land it on Monday.


Bah, I meant bug 887533.

- Kyle
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Engineering meeting to be broadcast on Air Mozilla starting Aug 6, 2013

2013-08-02 Thread Lawrence Mandel
The weekly Engineering meeting will be broadcast on Air Mozilla starting this 
coming week (Aug 6, 2013). The reasons for broadcasting this meeting on Air 
Mozilla are to:

1. provide a recording of the meeting for those who are unable to attend at the 
scheduled time
2. make it easier for people to attend (no need for Vidyo)

Recordings will be archived on Air Mozilla for 3 months.

As a reminder, Air Mozilla broadcasts and recordings are for a global audience. 
While no public engineering related topic is off limits, please be mindful of 
your language and tone.

Thanks,

Lawrence
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform