Re: On builds getting slower

2013-08-28 Thread Mark Hammond

On 28/08/2013 3:30 AM, Gregory Szorc wrote:
...
 Does anyone else see this libraries-always-rebuild behavior?

Not me - although I see just a few small ones - relevant parts of the log:

 5:20.54 webapprt.obj
 5:21.01 webapprt.cpp
 5:21.01
 5:21.05 webapprt-stub.exe
 5:21.07 MakeNSIS v2.46-Unicode - Copyright 1995-2009 Contributors
...
 5:21.27 Output: o:\src\mozilla-git\central2\obj-i686-pc-mingw32\webapprt\win\instgen\webapp-uninstaller.exe

...
 5:29.50 nsBrowserApp.obj
 5:30.12 nsBrowserApp.cpp
 5:30.12
 5:30.15 firefox.exe
 5:30.32    Creating library firefox.lib and object firefox.exp
 5:30.32
 5:30.35 Embedding manifest from o:/src/mozilla-git/central2/browser/app/firefox.exe.manifest

 5:31.38 MakeNSIS v2.46-Unicode - Copyright 1995-2009 Contributors
...
 5:32.25 Output: o:\src\mozilla-git\central2\obj-i686-pc-mingw32\browser\installer\windows\instgen\helper.exe

...
 5:32.69 MakeNSIS v2.46-Unicode - Copyright 1995-2009 Contributors
...
 5:32.90 Output: o:\src\mozilla-git\central2\obj-i686-pc-mingw32\browser\installer\windows\instgen\maintenanceservice_installer.exe


and that's it.

Mark



Re: On builds getting slower

2013-08-28 Thread Ted Mielczarek
On 8/28/2013 3:16 AM, Mark Hammond wrote:
 On 28/08/2013 3:30 AM, Gregory Szorc wrote:
 ...
  Does anyone else see this libraries-always-rebuild behavior?

 Not me - although I see just a few small ones - relevant parts of the
 log:

  5:20.54 webapprt.obj
  5:21.01 webapprt.cpp
  5:21.01
  5:21.05 webapprt-stub.exe
  5:21.07 MakeNSIS v2.46-Unicode - Copyright 1995-2009 Contributors
 ...
  5:21.27 Output:
 o:\src\mozilla-git\central2\obj-i686-pc-mingw32\webapprt\win\instgen\webapp-uninstaller.exe
 ...
  5:29.50 nsBrowserApp.obj
  5:30.12 nsBrowserApp.cpp
  5:30.12
  5:30.15 firefox.exe
  5:30.32    Creating library firefox.lib and object firefox.exp
  5:30.32
  5:30.35 Embedding manifest from
 o:/src/mozilla-git/central2/browser/app/firefox.exe.manifest
  5:31.38 MakeNSIS v2.46-Unicode - Copyright 1995-2009 Contributors
 ...
  5:32.25 Output:
 o:\src\mozilla-git\central2\obj-i686-pc-mingw32\browser\installer\windows\instgen\helper.exe
 ...
  5:32.69 MakeNSIS v2.46-Unicode - Copyright 1995-2009 Contributors
 ...
  5:32.90 Output:
 o:\src\mozilla-git\central2\obj-i686-pc-mingw32\browser\installer\windows\instgen\maintenanceservice_installer.exe

 and that's it.
FYI these are a known consequence[1] of the Build ID being updated every
build (and then compiled into the application binary).

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=740359
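(As a rough sketch of the mechanism Ted describes -- the header name and
value below are hypothetical -- a Build ID header is regenerated with a
fresh value on every build, so every object that includes it is recompiled
and every binary linking that object is relinked, even with no source
changes:)

  // buildid.h stands in for the generated header; regenerated each build.
  #include <cstdio>

  #define NS_BUILD_ID "20130828030000"  // fresh value on every build

  int main() {
      // The ID is baked into the binary, so the binary can never be
      // up to date across two consecutive builds.
      std::printf("BuildID: %s\n", NS_BUILD_ID);
      return 0;
  }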



Re: On builds getting slower

2013-08-27 Thread Benjamin Smedberg

On 8/26/2013 5:59 PM, Brian Smith wrote:

Immediate rebuild (no-op) was 10:13.05. A second no-op rebuild was
10:32.36. It looks like every shared library and every executable got
relinked.
This is a bug (rather serious at that!) which should be filed separately 
from generic build speed discussions. If we are relinking 
unnecessarily, we should use debug output and just fix it. Let's talk 
directly about collecting the logs which would make this fixable.


--BDS


Re: On builds getting slower

2013-08-26 Thread Brian Smith
I talked to gps today and told him I would let him know my numbers on my
machine. I will share them with everybody:

My Win32 debug clobber build (mach build after mach clobber) was 39:54.84
today, up from ~33:00 a few months ago. Not sure if it is my system.
Immediate rebuild (no-op) was 10:13.05. A second no-op rebuild was
10:32.36. It looks like every shared library and every executable got
relinked.

This is on Windows 7 64-bit, with 32GB of memory, an Intel 520 SSD, and an
Intel i7-3920XM @ 2.90GHz/3.10GHz. (Lenovo W530, plugged in, on Maximum
Performance power setting). I was doing other things (browsing, coding) in
the foreground while these builds ran in the background.

Cheers,
Brian



On Fri, Aug 2, 2013 at 2:13 PM, Gregory Szorc g...@mozilla.com wrote:

 (Cross posting. Please reply to dev.builds.)

 I've noticed an increase in the number of complaints about the build
 system recently. I'm not surprised. Building mozilla-central has gotten
 noticeably slower. More on that below. But first, a request.

 Many of the complaints I've heard have been from overhearing hallway
 conversations, noticing non-directed complaints on IRC, having 3rd parties
 report anecdotes, etc. *Please, please, please voice your complaints
 directly at me and the build peers.* Indirectly complaining isn't a very
 effective way to get attention or to spur action. I recommend posting to
 dev.builds so complaints and responses are public and easily archived. If
 you want a more personal conversation, just get in contact with me and I'll
 be happy to explain things.

 Anyway, on to the concerns.

 Builds are getting slower. http://brasstacks.mozilla.com/gofaster/#/ has
  high-level trends for our automation infrastructure. I've also noticed
 my personal machines taking ~2x longer than they did 2 years ago.
 Unfortunately, I can't give you a precise breakdown over where the
 increases have been because we don't do a very good job of recording these
 things. This is one reason why we have better monitoring on our Q3 goals
 list.

 Now, on to the reasons why builds are getting slower.

 # We're adding new code at a significant rate.

 Here is a breakdown of source file types in the tree by Gecko version.
 These are file types that are directly compiled or go through code
 generation to create a compiled file.

 Gecko 7: 3359 C++, 1952 C, 544 CC, 1258 XPIDL, 110 MM, 195 IPDL
 Gecko 14: 3980 C++, 2345 C, 575 CC, 1268 XPIDL, 272 MM, 197 IPDL, 30 WebIDL
 Gecko 21: 4606 C++, 2831 C, 1392 CC, 1295 XPIDL, 292 MM, 228 IPDL, 231
 WebIDL
 Gecko 25: 5211 C++, 3029 C, 1427 CC, 1268 XPIDL, 262 MM, 234 IPDL, 441
 WebIDL

 That nets totals of:

 7: 7418
 14: 8667
 21: 10875
 25: 11872

 As you can see, we're steadily adding new source code files to the tree.
 mozilla-central today has 60% more source files than Gecko 7! If you assume
 number of source files is a rough approximation for compile time, it's
 obvious why builds are getting slower: we're building more.

 As large new browser features like WebRTC and the ECMAScript
 Internationalization API continue to dump hundreds of new source files in
 the tree, build times will increase. There's nothing we can do about this
 short of freezing browser features. That's not going to happen.

 # Header dependency hell

 We have hundreds of header files that are included in hundreds or even
 thousands of other C++ files. Any time one of these widely-used headers
 changes, the object files get invalidated by the build system dependencies
 and we have to re-invoke the compiler. This also likely invalidates ccache,
 so it's just like a clobber build.

 No matter what we do to the build backend to make clobber builds faster,
 header dependency hell will continue to undermine this progress for
 dependency builds.

 I don't believe the build config group is in a position to tackle header
 dependency hell at this time. We are receptive to good ideas and will work
 with people to land patches. Perhaps an ad-hoc group of Platform developers
 can band together to address this?

 # Increased reliance on C++ language features

 I *suspect* that our increased reliance on C++ language features such as
 templates and new C++11 features is contributing to slower build times.
 It's been long known that templates and other advanced language features
 can blow up the compiler if used in certain ways. I also suspect that
 modern C++11 features haven't been optimized to the extent years-old C++
 features have been. Combine this with the fact compilers are working harder
 than ever to optimize code and it wouldn't surprise me if a CPU cycle
 invested in the compiler isn't giving the returns it used to.

 I would absolutely love for a compiler wizard to sit down and profile
 Gecko C++ in Clang, GCC, and MSVC. If there are things we can do to our
 source or to the compilers themselves to make things faster, that could be
 a huge win.

 Like dependency hell, I don't believe the build config group will tackle
 this any time soon.

Re: On builds getting slower

2013-08-11 Thread Nicholas Cameron
I experimented with IWYU for gfx/layers. I got decent build time improvements - 
around 12.5% (30s out of a 4min build with j1, 7s from 55s for j12) for a 
complete rebuild of gfx/layers using Clang on Linux.

The process was far from automatic. I think this is in part because the layers 
code has lots of platform specific parts and thus a lot of defines which 
sometimes trip up IWYU. In part just because IWYU is imperfect.

Patches in bug 903816.


Re: On builds getting slower

2013-08-08 Thread Nicholas Nethercote
On Sun, Aug 4, 2013 at 11:48 PM, Nicholas Nethercote
n.netherc...@gmail.com wrote:

 Nick, when you made changes to the JS engine's #includes, did you
 observe a change in build times?

 I don't have good measurements, largely because I've been doing it in
 small chunks over time.  I'll try to do
 https://bugzilla.mozilla.org/show_bug.cgi?id=886140 in a big chunk to
 get good measurements, and I'll try to remember to report back here
 once I've done that.

I've minimized the #includes of most of the non-Ion headers.  (Ion is about
half of SpiderMonkey these days).  The action migrated to
https://bugzilla.mozilla.org/show_bug.cgi?id=902917.  A
rebuild-everything-but-ICU build of the JS shell has
dropped from ~58 seconds to ~57 seconds, and the CPU time has dropped from
~6m4s to ~5m58s.

I also have a script that analyzes how many .cpp files are affected by each .h
file.  I've seen the counts for a lot of these files drop by a small amount,
e.g. 1-5.  And for a handful, it's dropped drastically, e.g.:

mozilla/AllocPolicy:       265 -> 0
ion/AsmJSSignalHandlers.h: 242 -> 2
jsiter.h:                  171 -> 102
jsworkers:                 106 -> 17
jsprf:                      96 -> 23
ArgumentsObject.h:          91 -> 23
Interpreter.h:             171 -> 102
ScopeObject-inl.h:          40 -> 8

The full diff is below.

So, in terms of improving build times, the results-to-effort ratio
isn't that high.  On the other hand, it does have some effect, and
it's hard to see how else to improve the C++ compile time part of the
builds without doing this.

Nick


 -- sorted by filename --
-177 .cpp files -- TraceLogging.h
-246 .cpp files -- assembler/assembler/ARMAssembler.h
-244 .cpp files -- assembler/assembler/ARMv7Assembler.h
-246 .cpp files -- assembler/assembler/AbstractMacroAssembler.h
-247 .cpp files -- assembler/assembler/AssemblerBuffer.h
-246 .cpp files -- assembler/assembler/AssemblerBufferWithConstantPool.h
-246 .cpp files -- assembler/assembler/CodeLocation.h
+176 .cpp files -- TraceLogging.h
+244 .cpp files -- assembler/assembler/ARMAssembler.h
+242 .cpp files -- assembler/assembler/ARMv7Assembler.h
+244 .cpp files -- assembler/assembler/AbstractMacroAssembler.h
+245 .cpp files -- assembler/assembler/AssemblerBuffer.h
+244 .cpp files -- assembler/assembler/AssemblerBufferWithConstantPool.h
+244 .cpp files -- assembler/assembler/CodeLocation.h
   2 .cpp files -- assembler/assembler/LinkBuffer.h
-244 .cpp files -- assembler/assembler/MIPSAssembler.h
-244 .cpp files -- assembler/assembler/MacroAssembler.h
-245 .cpp files -- assembler/assembler/MacroAssemblerARM.h
-244 .cpp files -- assembler/assembler/MacroAssemblerARMv7.h
-246 .cpp files -- assembler/assembler/MacroAssemblerCodeRef.h
-244 .cpp files -- assembler/assembler/MacroAssemblerMIPS.h
-244 .cpp files -- assembler/assembler/MacroAssemblerSparc.h
-244 .cpp files -- assembler/assembler/MacroAssemblerX86.h
-245 .cpp files -- assembler/assembler/MacroAssemblerX86Common.h
-244 .cpp files -- assembler/assembler/MacroAssemblerX86_64.h
+242 .cpp files -- assembler/assembler/MIPSAssembler.h
+242 .cpp files -- assembler/assembler/MacroAssembler.h
+243 .cpp files -- assembler/assembler/MacroAssemblerARM.h
+242 .cpp files -- assembler/assembler/MacroAssemblerARMv7.h
+244 .cpp files -- assembler/assembler/MacroAssemblerCodeRef.h
+242 .cpp files -- assembler/assembler/MacroAssemblerMIPS.h
+242 .cpp files -- assembler/assembler/MacroAssemblerSparc.h
+242 .cpp files -- assembler/assembler/MacroAssemblerX86.h
+243 .cpp files -- assembler/assembler/MacroAssemblerX86Common.h
+242 .cpp files -- assembler/assembler/MacroAssemblerX86_64.h
   1 .cpp files -- assembler/assembler/RepatchBuffer.h
-244 .cpp files -- assembler/assembler/SparcAssembler.h
-245 .cpp files -- assembler/assembler/X86Assembler.h
+242 .cpp files -- assembler/assembler/SparcAssembler.h
+243 .cpp files -- assembler/assembler/X86Assembler.h
 256 .cpp files -- assembler/jit/ExecutableAllocator.h
   1 .cpp files -- assembler/moco/MocoStubs.h
 255 .cpp files -- assembler/wtf/Assertions.h
 258 .cpp files -- assembler/wtf/Platform.h
-246 .cpp files -- assembler/wtf/SegmentedVector.h
+244 .cpp files -- assembler/wtf/SegmentedVector.h
   9 .cpp files -- assembler/wtf/VMTags.h
-  2 .cpp files -- builtin/BinaryData.h
+  1 .cpp files -- builtin/BinaryData.h
   8 .cpp files -- builtin/Eval.h
   4 .cpp files -- builtin/Intl.h
   1 .cpp files -- builtin/Iterator-inl.h
   4 .cpp files -- builtin/MapObject.h
  13 .cpp files -- builtin/Module.h
-103 .cpp files -- builtin/Object.h
- 86 .cpp files -- builtin/ParallelArray.h
+101 .cpp files -- builtin/Object.h
+ 81 .cpp files -- builtin/ParallelArray.h
   1 .cpp files -- builtin/Profilers.h
-175 .cpp files -- builtin/RegExp.h
+174 .cpp files -- builtin/RegExp.h
   4 .cpp files -- builtin/TestingFunctions.h
   2 .cpp files -- ctypes/CTypes.h
   2 .cpp files -- ctypes/Library.h
   2 .cpp files -- ctypes/typedefs.h
-254 .cpp files -- ds/BitArray.h
-242 .cpp files -- 

Re: On builds getting slower

2013-08-05 Thread Nicholas Nethercote
On Sun, Aug 4, 2013 at 10:12 PM, Nicholas Nethercote
n.netherc...@gmail.com wrote:

 I tried --enable-debug-symbols=-gsplit-dwarf in a debug build like this:

   CC='clang' CXX='clang++' ../configure --enable-debug
 --enable-debug-symbols=-gsplit-dwarf --enable-optimize='-O0'
 --enable-valgrind

 and it reduced the time from ~25 seconds to ~9 seconds.  The ~100 MB
 files shrunk down to ~32 MB.  Cool!

 Now, if only I could reduce opt link times similarly, that would be
 great.

glandium explained that --enable-debug-symbols is valid in a
--disable-debug build -- we include debug symbols by default, even in
--disable-debug builds, so that gdb works.

Using --enable-debug-symbols=-gsplit-dwarf in an opt build gets my JS
shell link time down to less than 4 seconds.  Yay!  Thanks, glandium.

Nick


Re: On builds getting slower

2013-08-05 Thread Justin Lebar
Nick, when you made changes to the JS engine's #includes, did you
observe a change in build times?

On Sat, Aug 3, 2013 at 1:14 AM, Nicholas Nethercote
n.netherc...@gmail.com wrote:
 On Sat, Aug 3, 2013 at 5:47 PM, Mike Hommey m...@glandium.org wrote:

 One piece of the puzzle, at least in Mozilla code, is the tendency to
 #include "Foo.h" when class Bar contains a field of type Foo*, instead of
 leaving the include to Bar.cpp and forward declaring in Bar.h. This
 certainly contributes to the inflation of the number of includes (most
 if not all includers of Bar.h don't need Foo.h, and that chains up
 pretty easily).

 I don't remember if IWYU tells about those. Does it?

 It does!  In the "file X should add the following:" parts it lists
 both #includes and forward declarations.

 Despite its imperfections[*] it's a pretty good tool, and you can get
 much further, much faster with it than you can manually.  If anyone
 wants to try it out, I'm happy to help with setting it up and all
 that.

 Nick

 [*] One imperfection I forgot to mention is that, although it gives me
 info on every .cpp file in SpiderMonkey, for some reason it doesn't
 give me info on every .h file, and I haven't worked out why.  This is
 frustrating, since minimizing #include statements in .h files is
 likely to have bigger benefits than minimizing them in .cpp files.


Re: On builds getting slower

2013-08-05 Thread Nicholas Nethercote
 I fixed these in https://bugzilla.mozilla.org/show_bug.cgi?id=881579.
 Unlike the #include minimization, these don't require domain-specific
 expertise and are easy to fix.

 Did you measure a noticeable performance improvement?  I can't imagine
 that it would take too much time to include a file that's scanned and
 skipped in its entirety by the preprocessor without invoking the
 actual compiler, even without this optimization.

I just measured it carefully.  I tried adding a useless |#if 0| /
|#endif| pair of lines to the top of every .h file in SpiderMonkey.  As
an example of the effect, this increased the number of times jsapi.h
was read from 228 to 3116.

However, the effect on a rebuild-everything-in-the-JS-shell-except-ICU
was minimal.  The results are a bit noisy, but it maybe reduced the
time from ~59.5s to ~58.5s.  Pretty weak tea :(

Nick


Re: On builds getting slower

2013-08-05 Thread Nicholas Nethercote
On Sun, Aug 4, 2013 at 11:05 PM, Justin Lebar justin.le...@gmail.com wrote:
 Nick, when you made changes to the JS engine's #includes, did you
 observe a change in build times?

I don't have good measurements, largely because I've been doing it in
small chunks over time.  I'll try to do
https://bugzilla.mozilla.org/show_bug.cgi?id=886140 in a big chunk to
get good measurements, and I'll try to remember to report back here
once I've done that.

Having said that, my gut feeling is that it won't make much difference
on clobber builds, but will make a large difference on at least some
incremental builds.  E.g. I mentioned above how vm/Stack-inl.h
previously affected 125 files and it now affects 30.

Nick


Re: On builds getting slower

2013-08-05 Thread Boris Zbarsky

On 8/5/13 2:05 AM, Justin Lebar wrote:

Nick, when you made changes to the JS engine's #includes, did you
observe a change in build times?


Just for data, when khuey just reduced the number of includes in 
bindings code, that did in fact affect build times. 
https://bugzilla.mozilla.org/show_bug.cgi?id=887533#c8 has some numbers, 
but the upshot is that rebuilding every single binding .cpp (equivalent 
of a clobber build) went from about 5 minutes to about 3 minutes.


Which is still too darned long.  :(

-Boris


Re: On builds getting slower

2013-08-05 Thread Boris Zbarsky

On 8/5/13 1:46 AM, Joshua Cranmer wrote:

DOMJSProxyHandler.h [2614, #197] includes xpcpublic.h [2645, #188]
includes nsIURI [3295, #121]. DOMJSProxyHandler appears to include
xpcpublic.h solely for IsDOMProxy; xpcpublic.h appears to include nsIURI
because it uses it as an nsCOMPtr in the CompartmentStatsExtras class it
defines. The end result is that touching nsIURI will require us to
rebuild all of the DOM bindings.


Note that BindingUtils.h also includes xpcpublic.h, so even fixing the 
DOMJSProxyHandler bits wouldn't necessarily help.


On the bright side, nsIURI is almost never touched, unlike xpcpublic itself.

-Boris


Re: On builds getting slower

2013-08-04 Thread Aryeh Gregor
On Sat, Aug 3, 2013 at 6:36 AM, Nicholas Nethercote
n.netherc...@gmail.com wrote:
 Gregory suggested that headers aren't something that the build config
 group can tackle, and I agree.  Modifying #include statements en masse
 is much easier if you have some familiarity with the code.  You need a
 sense of which headers should include which others, and often you have
 to move code around.  So I encourage people to use IWYU on parts of
 the code they are familiar with.  (Aryeh did this with editor/ in
 https://bugzilla.mozilla.org/show_bug.cgi?id=772807.)

I'm not sure how much it helped, though, as I note in the bug.  It
didn't appreciably reduce clobber build times.  It might have reduced
recompile times -- but maybe not, because a lot of the removed headers
were included by one of the other included headers anyway.  It's very
nice that IWYU was able to remove the nsString.h include from
nsComposerCommands.h, but probably every .cpp that includes
nsComposerCommands.h includes nsString.h somehow anyway, in which case
it doesn't really help anything.

I should also note that I did once spend a bit of time poking at
high-profile headers to see if I could manually remove dependency on
other headers somehow, but failed.  Things like nsCOMPtr.h already
have a pretty minimal set of includes.

 SpiderMonkey had ~30 header files that violated this, most of them by
 doing something like this:

 #if !defined(jsion_baseline_frame_inl_h__) && defined(JS_ION)
 ...
 #endif

 I fixed these in https://bugzilla.mozilla.org/show_bug.cgi?id=881579.
 Unlike the #include minimization, these don't require domain-specific
 expertise and are easy to fix.

Did you measure a noticeable performance improvement?  I can't imagine
that it would take too much time to include a file that's scanned and
skipped in its entirety by the preprocessor without invoking the
actual compiler, even without this optimization.
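(For context, the guard optimization in question depends on the canonical
layout below, where the entire header body sits inside a single #ifndef,
so the compiler can remember the file is guarded and skip re-reading it.
A minimal sketch of one plausible shape of the fix, with the extra
condition moved inside the guard:)

  #ifndef jsion_baseline_frame_inl_h__
  #define jsion_baseline_frame_inl_h__

  #ifdef JS_ION
  // ... header contents ...
  #endif

  #endif  /* jsion_baseline_frame_inl_h__ */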

On Sat, Aug 3, 2013 at 6:59 AM, L. David Baron dba...@dbaron.org wrote:
 This tool sounds great.  I suspect there's even more to be gained
 that it can't detect, though, from things that are used, but could
 easily be made not used.

It has an option to add comments for why headers are included, like

#include "nsISupportsImpl.h"   // for nsPresContext::Release
#include "nsISupportsUtils.h"  // for NS_IF_ADDREF
#include "nsIURI.h"            // for nsIURI
#include "nsPresContext.h"     // for nsPresContext
#include "nscore.h"            // for NS_IMETHODIMP, nsresult, etc

If you're familiar with the code in the file, you should be able to
spot unneeded includes more easily with this info.  For instance,
NS_IF_ADDREF could probably be removed from
nsComposerDocumentCommands.cpp (if it hasn't been already), and then
the nsISupportsUtils.h include could be removed.  nsIURI, most likely
not.

In practice, I found IWYU to be somewhat frustrating to use because of
a few key bugs and omissions (which I don't remember off the top of my
head, but filed in their bug tracker, and IIRC they weren't fixed).
It also didn't seem willing to look at header files that didn't
correspond to source files, which is a lot of the most interesting
ones, and it crashed a lot.  If it were a bit more polished and
reliable, it would be great to run automatically on every checkin,
IMO!  It will make the changes for you automatically if you give it
the right instructions (see exact command in bug 772807 comment 0).
The alphabetization and commenting are nice features on their own even
without performance improvement.  But I didn't find it to be worth it
in practice.  Which is a pity, because it was a close thing.


Re: On builds getting slower

2013-08-04 Thread Nicholas Nethercote
On Sat, Aug 3, 2013 at 5:47 PM, Mike Hommey m...@glandium.org wrote:

 If I could speed up any part of the builds, it would be linking.
 Waiting a long time to test a one file change sucks.

 If you're on linux, you can try --enable-debug-symbols=-gsplit-dwarf.

That worked nicely.  Before I get there, some measurements...

I did some opt builds of the JS shell, configuring like this:

  CC='clang' CXX='clang++' ../configure --disable-debug --enable-optimize

Here are some of the steps that happen after the last .cpp file is compiled:

- Use |ar| to create libjs_static.a, which is 369 MB.

- Run |ranlib libjs_static.a|.

- Link libmozjs-25.0a.so, which is 102 MB.

- Link |js|, i.e. the actual shell, which is 100 MB.

- Link jsapi-tests/jsapi-tests, which is 113 MB.

- Link gdb/tests, which is 98 MB.

All this takes about 21 seconds, which isn't surprising given the size
of those files.

I tried --enable-debug-symbols=-gsplit-dwarf in a debug build like this:

  CC='clang' CXX='clang++' ../configure --enable-debug
--enable-debug-symbols=-gsplit-dwarf --enable-optimize='-O0'
--enable-valgrind

and it reduced the time from ~25 seconds to ~9 seconds.  The ~100 MB
files shrunk down to ~32 MB.  Cool!

Now, if only I could reduce opt link times similarly, that would be
great.  Also, I wonder if I can stop jsapi-tests and gdb-tests from
building...

Nick


Re: On builds getting slower

2013-08-04 Thread Joshua Cranmer 

On 8/2/2013 4:13 PM, Gregory Szorc wrote:

# Header dependency hell

We have hundreds of header files that are included in hundreds or even 
thousands of other C++ files. Any time one of these widely-used 
headers changes, the object files get invalidated by the build system 
dependencies and we have to re-invoke the compiler. This also likely 
invalidates ccache, so it's just like a clobber build.


No matter what we do to the build backend to make clobber builds 
faster, header dependency hell will continue to undermine this 
progress for dependency builds.


I don't believe the build config group is in a position to tackle 
header dependency hell at this time. We are receptive to good ideas 
and will work with people to land patches. Perhaps an ad-hoc group of 
Platform developers can band together to address this?


As a simple hacking project, I put together a simple python script that 
analyzed all the #include's of everything and then sorted headers by the 
number of files [anything that's not a .h file] that include them 
transitively, which is a rough estimate of how many files would be 
recompiled if that header changed. The full list is uploaded to bug 
901132. A brief highlight of results that may surprise you:


The most included header file is prcpucfg.h, followed closely by prtypes.h.
The most included non-NSPR file is mfbt/Compiler.h at #11, followed by 7 
headers terminating in Atomics.h.

#18 is prlog.h
#19 is a three-way tie between mozalloc.h, xpcom-config.h and fallible.h
#23 is xpcom's nsError.h and the other related error headers [tie]
#27 is the venerable nscore.h
#40 is nsrootidl.idl/nsISupports.idl
The first JS header is #86
#115 is jsapi.h
#120 is nsIURI.idl, the first header not coming from something you'd 
expect to be included in most places

#121 is nsUnicharUtils.h
#164 is nsWrapperCache.h
#165 is a tie between PSpdyPush3.h and nsILoadGroup.idl
... and at this point the list starts including lots of things that's 
not from XPCOM, MFBT, JS, mozalloc, or NSPR.


There are 463 headers included by more than 1000 files, 344 by more than 
2000, 140 by more than 3000, 80 by more than 4000, 21 by more than 5000, 
and 4 by more than 6000. There are also 765 headers included by more 
than 500 files, and 1,563 by more than 100. Note that there are 11,432 
files overall by this count, so 13% of our header files (there are .c 
files in that mix, so I wouldn't call them all header files) would 
require rebuilding at least 100 files.


Also, note that this is generated from a debug comm-central build. Since 
nsDebug.h [#29] includes prprf.h only in debug builds, and comm-central 
includes the LDAP C-SDKs which also interact with NSS and NSPR, the NSPR 
numbers in particular are slightly inflated from an opt mozilla-central 
build.


[Note: these stats differ slightly from the list I posted, since I 
realized that my script was including .idl files in the proxy for 
terminal files. There was surprisingly little churn among the top 
several files when I fixed that.]


Take aways:
1. Ehsan's dieprtypesdie bug needs to aggressively prune NSPR includes 
from XPCOM and JS headers to have any hope of working.
2. IDL files should ideally only need to include a single file: the one 
that the interface is a base of. Everything else should be a forward 
declaration.
3. There is probably not much gain in trying to trim includes in XPCOM 
or MFBT: many of them will get included anyways by virtues of being the 
core datatype.
4. Public header files that are heavily included need to trim down their 
includes to only the ones they absolutely need *in the header*. Some of 
the heavily-included stuff seems to be coming from an unexpectedly-common 
header file including other stuff:
DOMJSProxyHandler.h [2614, #197] includes xpcpublic.h [2645, #188] 
includes nsIURI [3295, #121]. DOMJSProxyHandler appears to include 
xpcpublic.h solely for IsDOMProxy; xpcpublic.h appears to include nsIURI 
because it uses it as an nsCOMPtr in the CompartmentStatsExtras class it 
defines. The end result is that touching nsIURI will require us to 
rebuild all of the DOM bindings.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist



Re: On builds getting slower

2013-08-03 Thread Mike Hommey
On Sat, Aug 03, 2013 at 01:36:15PM +1000, Nicholas Nethercote wrote:
(...)
 Even worse, link times are through the roof.  I was thrilled when, two
 years ago, I switched from ld to gold and linking time plummeted.  The
 first link after rebooting was always slow, but I could link libxul
 back then in about 9 seconds.  I haven't measured recently but I'm
 certain it's now *much* higher.  Even the JS shell, which used to take
 hardly any time to link, now takes 10s or more;  enough that I often
 switch to doing something else while waiting.

The JS engine, having grown to be a lot of C++ instead of being mainly
C, also saw its debugging info (DWARF) grow significantly. FWIW, the
debug info .so for libmozjs.so at the time of Firefox 15 was 69MB on
Linux64, it is 129MB in Firefox 22. More debug info means link time
increase, and suggests build increase too (if debug info is bigger, it
means either code is bigger, or code is more complex ; or both)

 If I could speed up any part of the builds, it would be linking.
 Waiting a long time to test a one file change sucks.

If you're on linux, you can try --enable-debug-symbols=-gsplit-dwarf.
 
(...)

 IWYU tells you the #includes that are unnecessary;  it also tells you
 which ones are missing, i.e. which ones are being #included indirectly
 through another header.  I've only bothered removing #includes because
 adding the missing ones doesn't feel worthwhile.  Sometimes this means
 that when you remove an unnecessary |#include "a.h"|, you have to add
 a |#include "b.h"| because b.h was being pulled in only via a.h.  Not
 a big deal.

(...)

One piece of the puzzle, at least in Mozilla code, is the tendency to
#include "Foo.h" when class Bar contains a field of type Foo*, instead of
leaving the include to Bar.cpp and forward declaring in Bar.h. This
certainly contributes to the inflation of the number of includes (most
if not all includers of Bar.h don't need Foo.h, and that chains up
pretty easily).
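(A minimal sketch of that idiom, with hypothetical Foo/Bar files:)

  // Bar.h -- a forward declaration is enough for a pointer member, so
  // includers of Bar.h no longer pull in Foo.h and everything it drags in.
  class Foo;

  class Bar {
  public:
      void Use();
  private:
      Foo* mFoo;  // Foo's size and layout are not needed in the header
  };

  // Bar.cpp -- only here, where Foo is actually dereferenced, do we need:
  //   #include "Foo.h"
  //   void Bar::Use() { mFoo->DoSomething(); }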

I don't remember if IWYU tells about those. Does it?

(...)
 
 (Another annoying Linux thing is that we have these Mozilla-specific
 $OBJDIR/dist/system_wrappers_js/*.h that just #include a system
 header.  I don't entirely understand what they're for, but they don't
 have a #ifndef wrapper and so also get included many times per .cpp
 file.  I tried adding a #ifndef wrapper but got bustage I didn't
 understand.  At least these files are tiny.)

They are there to reset visibility to default before including the real
header and reset it to hidden when returning from the include.
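(Roughly what such a wrapper plausibly looks like -- the exact pragmas are
an assumption, not copied from the tree; note there is deliberately no
#ifndef guard, matching the behavior described above:)

  // dist/system_wrappers_js/stdio.h (hypothetical contents)
  #pragma GCC visibility push(default)
  #include_next <stdio.h>   // the real system header
  #pragma GCC visibility pop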

Mike


Re: On builds getting slower

2013-08-03 Thread Nicholas Nethercote
On Sat, Aug 3, 2013 at 5:47 PM, Mike Hommey m...@glandium.org wrote:

 One piece of the puzzle, at least in Mozilla code, is the tendency to
 #include "Foo.h" when class Bar contains a field of type Foo*, instead of
 leaving the include to Bar.cpp and forward declaring in Bar.h. This
 certainly contributes to the inflation of the number of includes (most
 if not all includers of Bar.h don't need Foo.h, and that chains up
 pretty easily).

 I don't remember if IWYU tells about those. Does it?

It does!  In the "file X should add the following:" parts it lists
both #includes and forward declarations.

Despite its imperfections[*] it's a pretty good tool, and you can get
much further, much faster with it than you can manually.  If anyone
wants to try it out, I'm happy to help with setting it up and all
that.

Nick

[*] One imperfection I forgot to mention is that, although it gives me
info on every .cpp file in SpiderMonkey, for some reason it doesn't
give me info on every .h file, and I haven't worked out why.  This is
frustrating, since minimizing #include statements in .h files is
likely to have bigger benefits than minimizing them in .cpp files.


Re: On builds getting slower

2013-08-03 Thread Mike Hommey
On Sat, Aug 03, 2013 at 04:47:29PM +0900, Mike Hommey wrote:
 On Sat, Aug 03, 2013 at 01:36:15PM +1000, Nicholas Nethercote wrote:
 (...)
  Even worse, link times are through the roof.  I was thrilled when, two
  years ago, I switched from ld to gold and linking time plummeted.  The
  first link after rebooting was always slow, but I could link libxul
  back then in about 9 seconds.  I haven't measured recently but I'm
  certain it's now *much* higher.  Even the JS shell, which used to take
  hardly any time to link, now takes 10s or more;  enough that I often
  switch to doing something else while waiting.
 
 The JS engine, having grown to be a lot of C++ instead of being mainly
 C, also saw its debugging info (DWARF) grow significantly. FWIW, the
 debug info .so for libmozjs.so at the time of Firefox 15 was 69MB on
 Linux64; it is 129MB in Firefox 22. More debug info means longer link
 times, and suggests a build time increase too (if debug info is bigger, it
 means either code is bigger, or code is more complex; or both).
 
  If I could speed up any part of the builds, it would be linking.
  Waiting a long time to test a one file change sucks.
 
 If you're on linux, you can try --enable-debug-symbols=-gsplit-dwarf.

For those who would want to try it at home, note this doesn't work well
with ccache:
https://bugzilla.samba.org/show_bug.cgi?id=10005

Mike


On builds getting slower

2013-08-02 Thread Gregory Szorc

(Cross posting. Please reply to dev.builds.)

I've noticed an increase in the number of complaints about the build 
system recently. I'm not surprised. Building mozilla-central has gotten 
noticeably slower. More on that below. But first, a request.


Many of the complaints I've heard have been from overhearing hallway 
conversations, noticing non-directed complaints on IRC, having 3rd 
parties report anecdotes, etc. *Please, please, please voice your 
complaints directly at me and the build peers.* Indirectly complaining 
isn't a very effective way to get attention or to spur action. I 
recommend posting to dev.builds so complaints and responses are public 
and easily archived. If you want a more personal conversation, just get 
in contact with me and I'll be happy to explain things.


Anyway, on to the concerns.

Builds are getting slower. http://brasstacks.mozilla.com/gofaster/#/ has 
high-level trends for our automation infrastructure. I've also noticed 
my personal machines taking ~2x longer than they did 2 years ago. 
Unfortunately, I can't give you a precise breakdown over where the 
increases have been because we don't do a very good job of recording 
these things. This is one reason why we have better monitoring on our Q3 
goals list.


Now, on to the reasons why builds are getting slower.

# We're adding new code at a significant rate.

Here is a breakdown of source file types in the tree by Gecko version. 
These are file types that are directly compiled or go through code 
generation to create a compiled file.


Gecko 7: 3359 C++, 1952 C, 544 CC, 1258 XPIDL, 110 MM, 195 IPDL
Gecko 14: 3980 C++, 2345 C, 575 CC, 1268 XPIDL, 272 MM, 197 IPDL, 30 WebIDL
Gecko 21: 4606 C++, 2831 C, 1392 CC, 1295 XPIDL, 292 MM, 228 IPDL, 231 
WebIDL
Gecko 25: 5211 C++, 3029 C, 1427 CC, 1268 XPIDL, 262 MM, 234 IPDL, 441 
WebIDL


That nets totals of:

7: 7418
14: 8667
21: 10875
25: 11872

As you can see, we're steadily adding new source code files to the tree. 
mozilla-central today has 60% more source files than Gecko 7! If you 
assume number of source files is a rough approximation for compile time, 
it's obvious why builds are getting slower: we're building more.


As large new browser features like WebRTC and the ECMAScript 
Internationalization API continue to dump hundreds of new source files 
in the tree, build times will increase. There's nothing we can do about 
this short of freezing browser features. That's not going to happen.


# Header dependency hell

We have hundreds of header files that are included in hundreds or even 
thousands of other C++ files. Any time one of these widely-used headers 
changes, the object files get invalidated by the build system 
dependencies and we have to re-invoke the compiler. This also likely 
invalidates ccache, so it's just like a clobber build.


No matter what we do to the build backend to make clobber builds faster, 
header dependency hell will continue to undermine this progress for 
dependency builds.


I don't believe the build config group is in a position to tackle header 
dependency hell at this time. We are receptive to good ideas and will 
work with people to land patches. Perhaps an ad-hoc group of Platform 
developers can band together to address this?


# Increased reliance on C++ language features

I *suspect* that our increased reliance on C++ language features such as 
templates and new C++11 features is contributing to slower build times. 
It's been long known that templates and other advanced language features 
can blow up the compiler if used in certain ways. I also suspect that 
modern C++11 features haven't been optimized to the extent years-old C++ 
features have been. Combine this with the fact compilers are working 
harder than ever to optimize code and it wouldn't surprise me if a CPU 
cycle invested in the compiler isn't giving the returns it used to.


I would absolutely love for a compiler wizard to sit down and profile 
Gecko C++ in Clang, GCC, and MSVC. If there are things we can do to our 
source or to the compilers themselves to make things faster, that could 
be a huge win.


Like dependency hell, I don't believe the build config group will tackle 
this any time soon.


# Clobbers are more frequent and more annoying

Clobbers are annoying. It annoys me every time I see the CLOBBER file 
has been updated. I won't make excuses for open bugs on known 
required-clobber issues: we should fix them all.


I suspect clobbers have become more annoying in recent months because 
overall build times have increased. If builds only took 5 minutes, I'm 
not sure the cries would be as loud. That's no excuse for not fixing it, 
however. Please continue to loudly complain every time there is a clobber.


# Slowness Summary

There are many factors contributing to making the build system slower. I 
would argue that the primary contributors are not within the control of 
the build config group. Instead, the fault lives with all the compiled 
code (mainly 

Re: On builds getting slower

2013-08-02 Thread Ehsan Akhgari
First of all, I'd like to thank you and the rest of the build peers for 
your tireless efforts!


On 2013-08-02 5:13 PM, Gregory Szorc wrote:

(Cross posting. Please reply to dev.builds.)


Sorry, but cross-posting to both lists.  I don't think most of the 
people interested in this conversation are on dev.builds (I am, FWIW.)



I've noticed an increase in the number of complaints about the build
system recently. I'm not surprised. Building mozilla-central has gotten
noticeably slower. More on that below. But first, a request.

Many of the complaints I've heard have been from overhearing hallway
conversations, noticing non-directed complaints on IRC, having 3rd
parties report anecdotes, etc. *Please, please, please voice your
complaints directly at me and the build peers.* Indirectly complaining
isn't a very effective way to get attention or to spur action. I
recommend posting to dev.builds so complaints and responses are public
and easily archived. If you want a more personal conversation, just get
in contact with me and I'll be happy to explain things.


This is fair, but really the builds getting slower is so obvious that I 
would be surprised if none of the build config peers have noticed it in 
their daily work.  :-)



Builds are getting slower. http://brasstacks.mozilla.com/gofaster/#/ has
high-level trends for our automation infrastructure. I've also noticed
my personal machines taking ~2x longer than they did 2 years ago.
Unfortunately, I can't give you a precise breakdown over where the
increases have been because we don't do a very good job of recording
these things. This is one reason why we have better monitoring on our Q3
goals list.


My anecdotal evidence also matches the 2x slower metric.


Now, on to the reasons why builds are getting slower.

# We're adding new code at a significant rate.

Here is a breakdown of source file types in the tree by Gecko version.
These are file types that are directly compiled or go through code
generation to create a compiled file.

Gecko 7: 3359 C++, 1952 C, 544 CC, 1258 XPIDL, 110 MM, 195 IPDL
Gecko 14: 3980 C++, 2345 C, 575 CC, 1268 XPIDL, 272 MM, 197 IPDL, 30 WebIDL
Gecko 21: 4606 C++, 2831 C, 1392 CC, 1295 XPIDL, 292 MM, 228 IPDL, 231
WebIDL
Gecko 25: 5211 C++, 3029 C, 1427 CC, 1268 XPIDL, 262 MM, 234 IPDL, 441
WebIDL

That nets totals of:

7: 7418
14: 8667
21: 10875
25: 11872

As you can see, we're steadily adding new source code files to the tree.
mozilla-central today has 60% more source files than Gecko 7! If you
assume number of source files is a rough approximation for compile time,
it's obvious why builds are getting slower: we're building more.

As large new browser features like WebRTC and the ECMAScript
Internationalization API continue to dump hundreds of new source files
in the tree, build times will increase. There's nothing we can do about
this short of freezing browser features. That's not going to happen.


Hmm.  I'm not sure if the number of source files is directly correlated 
to build times, but yeah there's clearly a trend here!



# Header dependency hell

We have hundreds of header files that are included in hundreds or even
thousands of other C++ files. Any time one of these widely-used headers
changes, the object files get invalidated by the build system
dependencies and we have to re-invoke the compiler. This also likely
invalidates ccache, so it's just like a clobber build.

No matter what we do to the build backend to make clobber builds faster,
header dependency hell will continue to undermine this progress for
dependency builds.

I don't believe the build config group is in a position to tackle header
dependency hell at this time. We are receptive to good ideas and will
work with people to land patches. Perhaps an ad-hoc group of Platform
developers can band together to address this?


I have been playing with an idea in my head about this.  What if we had 
a list of the most popular headers in our tree, and we looked through 
them and tried to cut down the number of #includes in the headers?  That 
should help create more isolated sub-graphs and hopefully help with 
breaking the most severe dependency chains.


Writing a tool to spit out this information should be fairly easy.


# Increased reliance on C++ language features

I *suspect* that our increased reliance on C++ language features such as
templates and new C++11 features is contributing to slower build times.
It's been long known that templates and other advanced language features
can blow up the compiler if used in certain ways. I also suspect that
modern C++11 features haven't been optimized to the extent years-old C++
features have been. Combine this with the fact compilers are working
harder than ever to optimize code and it wouldn't surprise me if a CPU
cycle invested in the compiler isn't giving the returns it used to.

I would absolutely love for a compiler wizard to sit down and profile
Gecko C++ in Clang, GCC, and MSVC. If there are things we can do to our
source or to the compilers themselves to make things faster, that could
be a huge win.

Re: On builds getting slower

2013-08-02 Thread Kyle Huey
On Fri, Aug 2, 2013 at 3:38 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 # Building faster

 One of our Q3 goals is to replace the export tier with something more
 efficient. More on tiers at [1]. This should make builds faster,
 especially on pymake. Just earlier this week we made WebIDL and XPIDL
 code generation concurrent. Before, they executed serially, failing to
 utilize multiple CPU cores. Next steps are XPIDL code gen, installing
 headers, and preprocessing. This is all tracked in bug 892644.


 Out of curiosity, why was the export tier the first target for this?  I may
 lack context here, but the slowest tier that we have is the platform libs
 tier.  Wouldn't focusing on that have given us the biggest possible bang
 for the buck?


Tier is the wrong term here[0].  I think it would be more correct to say
that we're removing the export phase.  Our build system currently visits
every[1] directory 3 times, once to build the 'export' target, once to
build the 'libs' target, and once to build the 'tools' target.  Tiers are
groupings of directories.  The build system guarantees that every directory
in a given tier has export, libs, and tools targets processed before doing
anything in the following tier.  The goal is to remove the export phase
across all tiers and replace it with a dedicated 'precompile' tier for the
things that need to be done before compiling C++/etc in the libs phase
(such as WebIDL/IPDL code generation, XPIDL header generation, putting
headers in dist/include, etc).

- Kyle

[0] at least in the sense that our build system has used it in the past.
[1] this isn't strictly true (e.g. TOOL_DIRS) but is close enough for the
purposes of this conversation.


Re: On builds getting slower

2013-08-02 Thread Gregory Szorc

On 8/2/13 3:38 PM, Ehsan Akhgari wrote:

Hmm.  I'm not sure if the number of source files is directly correlated
to build times, but yeah there's clearly a trend here!


I concede a lines of code count would be a better indicator. I'm lazy.


# Header dependency hell

I have been playing with an idea in my head about this.  What if we had
a list of the most popular headers in our tree, and we looked through
them and tried to cut down the number of #includes in the headers?  That
should help create more isolated sub-graphs and hopefully help with
breaking the most severe dependency chains.

Writing a tool to spit out this information should be fairly easy.


I'll try to get a tool in the tree for people to run. 
https://bugzilla.mozilla.org/show_bug.cgi?id=901132



# Increased reliance on C++ language features

But I'm not convinced at all about the C++11 features contributing to
this.  I cannot think of any reason at all why that should be the case
for the things that we've started to use.  Do you have any evidence to
implicate some of those features?


No. Just my general distrust of new/young vs mature software.


# Clobbers are more frequent and more annoying

This should be relatively easy to address (compared to the other
things that we can do, of course).  I assert that every time we touch
the CLOBBER file, it's because the build system could not figure out the
dependencies properly.  Fortunately we can easily log the CLOBBER file
and go back in time and find all of the patches that included CLOBBER
modifications and debug the build dependency issues.  Has there been any
effort to address these issues by looking at the testcases that we have
in form of patches?


To some degree, yes. https://bugzilla.mozilla.org/show_bug.cgi?id=890744 
is a good example. Vacation schedules didn't align for quick action. 
There may also be a pymake bug or two involved.


Also, you could say people have been touching CLOBBER prematurely. I 
know there are a few cases where CLOBBER was touched in hopes it fixed a 
problem, didn't, and the commit history was left with a changeset that 
changed CLOBBER.



# Slowness Summary

Every time that we don't utilize 100% of our cores during the build
process, that's an unnecessary slowdown.  That consistently wastes a lot
of time during every build, and it also means that we can't address this
by getting more powerful machines.  :(


Right. If you plot CPU usage vs time, we can make the build faster by 
filling out the box and using 100% of all cores or by decreasing the 
total number of required CPU cycles to build. We have chosen to focus 
mostly on the former because optimizing build actions can be a lot of 
work. We've got lucky in some cases (e.g. WebIDLs in bug 861587). I fear 
compiling C++ will be much harder. I'm hoping PCH and fixing dependency 
hell are medium-hanging fruits.


I also have measurements that show we peak out at certain concurrency 
levels. The trend in CPUs is towards more cores, not higher clock speed. 
So focusing on effective core usage will continue to be important. 
Derecursifying the build will allow us to use more cores because make 
won't be starved during directory traversal. Remember, concurrent make 
only works within the same directory or for directories under 
PARALLEL_DIRS. Different top-level directories during tier traversal 
(e.g. dom and xpcom) are executed sequentially.



# Building faster

One of our Q3 goals is to replace the export tier with something more
efficient. More on tiers at [1]. This should make builds faster,
especially on pymake. Just earlier this week we made WebIDL and XPIDL
code generation concurrent. Before, they executed serially, failing to
utilize multiple CPU cores. Next steps are XPIDL code gen, installing
headers, and preprocessing. This is all tracked in bug 892644.


 Out of curiosity, why was the export tier the first target for this?  I
may lack context here, but the slowest tier that we have is the platform
libs tier.  Wouldn't focusing on that have given us the biggest possible
bang for the buck?


Making platform libs faster will without a doubt have the biggest 
impact. We chose to start with export first for a few reasons.


First, it's simple. We had to start somewhere. platform/libs is a 
magnitude more complex. We are making major refactorings in export 
already and we felt it best to prove out concepts with export rather 
than going for the hardest problem first.


Second, export is mostly standalone targets. We would like to port the 
build backend bottom up instead of top down so we can make the 
dependencies right from the beginning. If we started with platform/lib, 
we'd have to hack something together now and revamp it with proper 
dependencies later.


Third, export is horribly inefficient. pymake spends an absurd amount of 
time traversing directories, parsing make files and doing very little 
for each directory in the export tier. Platform, by contrast, tends to 
have longer-running jobs 

Re: On builds getting slower

2013-08-02 Thread Gregory Szorc

On 8/2/13 4:43 PM, Robert O'Callahan wrote:

Nathan has just made an excellent post on this topic:
https://blog.mozilla.org/nfroyd/2013/08/02/i-got-99-problems-and-compilation-time-is-one-of-them/

It would be interesting to measure the number of non-blank preprocessed
lines in each build, over time. This is probably going up faster than
the number of overall source lines, possibly explaining why build times
increase faster than just the increasing size of the code.

Greg, I assume the build team has data on where time is spent in various
phases of the build today. Can you point us to that data? Especially
valuable if you have data over several releases.


1) Pull my patch queue from 
https://hg.mozilla.org/users/gszorc_mozilla.com/gecko-patches/

2) Apply the build-resource-monitor and build-resources-display patches
3) $ mach build
4) $ mach build-resource-usage

The raw data is saved to objdir/.mozbuild/build_resources.json. It 
contains CPU, memory, and I/O measurements for every second during the 
build along with timing information for the different tiers, subtiers, 
and directories.


Currently, the HTML display is kinda crap. It only displays CPU and it 
looks horrible. I'm not a professional web developer! The main goal with 
the initial patch is to have data collection so we can do nice things 
with it later.


Also, I haven't tested on Windows in a while. You also need psutil to be 
able to capture the data. psutil is currently optional in our build 
system. To test if you have it, run |mach python| and try to |import 
psutil|. And, it likely won't work with a fresh source checkout because 
psutil is built in configure and mach invokes configure, so there's a 
chicken and egg problem. That's pretty much why this hasn't landed yet. 
Yeah, I need to land this. It's on my Q3 goals list. Bug 883209 tracks.


Unfortunately we don't have this granular data for the past and likely 
never will unless someone wants to rebase and take a bunch of 
measurements. We do have old buildbot logs, but those aren't too useful 
without timestamps on each line (this is one reason mach prefixes times 
on each line - and yes, there needs to be an option to disable that).



Re: On builds getting slower

2013-08-02 Thread Nicholas Nethercote
 Building mozilla-central has gotten noticeably slower.

Yep.  A bit over two years ago I started doing frequent browser builds
for the first time;  previously I'd mostly just worked with the JS
shell.  I was horrified by the ~25 minutes it took for a clobber
build.  I got a new Linux64 box and build times dropped to ~12
minutes, which made a *large* difference to my productivity.

On that same machine, I'm now back to ~25 minutes again.  I've assumed
it's due to more code, specifically:

1. more code in the repository;
2. more code generated explicitly (e.g. dom bindings);
3. more code generated implicitly (i.e. templates).

I don't know the relative impacts, though 1 is clearly a big part of it.

Even worse, link times are through the roof.  I was thrilled when, two
years ago, I switched from ld to gold and linking time plummeted.  The
first link after rebooting was always slow, but I could link libxul
back then in about 9 seconds.  I haven't measured recently but I'm
certain it's now *much* higher.  Even the JS shell, which used to take
hardly any time to link, now takes 10s or more;  enough that I often
switch to doing something else while waiting.

If I could speed up any part of the builds, it would be linking.
Waiting a long time to test a one file change sucks.


 # Header dependency hell

I've recently done a bunch of work on improving the header situation
in SpiderMonkey.  I can break it down to two main areas.

== MINIMIZING #include STATEMENTS ==

There's a clang tool called include-what-you-use, a.k.a. IWYU
(http://code.google.com/p/include-what-you-use/).  It tells you
exactly which headers should be included in all your files.  I've used
it to minimize #includes somewhat already
(https://bugzilla.mozilla.org/show_bug.cgi?id=634839) and I plan to do
some more Real Soon Now
(https://bugzilla.mozilla.org/show_bug.cgi?id=888768).  There are
still a couple of hundred unnecessary #include statements in
SpiderMonkey.  (BTW, SpiderMonkey has ~280 .cpp files and ~370 .h
files.)

IWYU is great, because it's really hard to figure this stuff out
manually.  It's also not perfect;  about 5% of its suggestions are
simply wrong, i.e. it says you can remove a #include that you can't.
Also, there are often project-specific idioms that it doesn't know
about -- there were several, but the one I remember off the top of my
head is that it was constantly suggesting I remove
"mozilla/StandardInteger.h" and add <stdint.h> (thankfully that's not
an issue any more :)  There are pragmas that you can annotate your
source with, but I found they don't always work as advertised and
aren't really worth the effort.  Although IWYU basically works, it feels
a bit like software that doesn't get much maintenance.

I haven't been doing rigorous measurements, but I think that these
IWYU-related improvements don't do much for clobber builds, but can
help significantly with partial rebuilds.  It also just feels good to
make these improvements.

IWYU tells you the #includes that are unnecessary;  it also tells you
which ones are missing, i.e. which ones are being #included indirectly
through another header.  I've only bothered removing #includes because
adding the missing ones doesn't feel worthwhile.  Sometimes this means
that when you remove an unnecessary |#include "a.h"|, you have to add
a |#include "b.h"| because b.h was being pulled in only via a.h.  Not
a big deal.

Relatedly, jorendorff wrote a python script that identifies cycles in
header dependencies and diagnosed a cycle in SpiderMonkey that
involved *11* header files.  He and I broke that cycle in a series of
29 patches in
https://bugzilla.mozilla.org/show_bug.cgi?id=872416,
https://bugzilla.mozilla.org/show_bug.cgi?id=879831
https://bugzilla.mozilla.org/show_bug.cgi?id=886205.  Prior to the
last 9 patches, if you touched vm/Stack-inl.h and rebuilt, you'd
rebuild 125 .cpp files.  After these patches landed, it dropped to 30.
 The cycle-detection script has been incorporated into the |make
check-style| target that is about to land in
https://bugzilla.mozilla.org/show_bug.cgi?id=880088.
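(The script itself is Python; as a rough C++ sketch of the idea -- DFS over
a hypothetical header include graph, flagging back edges as cycles:)

  #include <cstdio>
  #include <map>
  #include <string>
  #include <vector>

  // header -> headers it includes
  using Graph = std::map<std::string, std::vector<std::string>>;
  enum Mark { White, Grey, Black };

  static bool Dfs(const Graph& g, const std::string& h,
                  std::map<std::string, Mark>& mark) {
      mark[h] = Grey;  // on the current DFS path
      auto it = g.find(h);
      if (it != g.end()) {
          for (const auto& inc : it->second) {
              if (mark[inc] == Grey) {  // back edge: include cycle
                  std::printf("cycle: %s -> %s\n", h.c_str(), inc.c_str());
                  return true;
              }
              if (mark[inc] == White && Dfs(g, inc, mark))
                  return true;
          }
      }
      mark[h] = Black;  // fully explored, not part of a cycle
      return false;
  }

  int main() {
      Graph g = {
          {"Stack-inl.h", {"Interpreter.h"}},
          {"Interpreter.h", {"Stack.h"}},
          {"Stack.h", {"Stack-inl.h"}},  // closes a 3-header cycle
      };
      std::map<std::string, Mark> mark;
      for (const auto& entry : g)
          if (mark[entry.first] == White)
              Dfs(g, entry.first, mark);
      return 0;
  }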

I've also done various bits of refactoring with an eye towards
simplifying the header dependencies.
https://bugzilla.mozilla.org/show_bug.cgi?id=880041 is one example.
These kinds of things can interact well with IWYU -- you do a
clean-up, then run IWYU to find all the #includes that are no longer
necessary.

Gregory suggested that headers aren't something that the build config
group can tackle, and I agree.  Modifying #include statements en masse
is much easier if you have some familiarity with the code.  You need a
sense of which headers should include which others, and often you have
to move code around.  So I encourage people to use IWYU on parts of
the code they are familiar with.  (Aryeh did this with editor/ in
https://bugzilla.mozilla.org/show_bug.cgi?id=772807.)

I should also note that this work is pretty tedious.  There's lots of
waiting for compilation, lots of try server runs to 

Re: On builds getting slower

2013-08-02 Thread L. David Baron
On Saturday 2013-08-03 13:36 +1000, Nicholas Nethercote wrote:
  # Header dependency hell
 
 I've recently done a bunch of work on improving the header situation
 in SpiderMonkey.  I can break it down to two main areas.
 
 == MINIMIZING #include STATEMENTS ==
 
 There's a clang tool called include-what-you-use, a.k.a. IWYU
 (http://code.google.com/p/include-what-you-use/).  It tells you
 exactly which headers should be included in all your files.  I've used
 it to minimize #includes somewhat already
 (https://bugzilla.mozilla.org/show_bug.cgi?id=634839) and I plan to do
 some more Real Soon Now
 (https://bugzilla.mozilla.org/show_bug.cgi?id=888768).  There are
 still a couple of hundred unnecessary #include statements in
 SpiderMonkey.  (BTW, SpiderMonkey has ~280 .cpp files and ~370 .h
 files.)

This tool sounds great.  I suspect there's even more to be gained
that it can't detect, though, from things that are used, but could
easily be made not used.

I did a few passes of poking through .deps/*.pp files, and looking
for things I thought didn't belong.  It's been a while, though.
(See bug 64023.)

khuey was also recently working on something to reduce some pretty
bad #include fanout related to the new DOM bindings generation.
(I'm not sure if it's landed.)

-David

-- 
L. David Baron                 http://dbaron.org/
Mozilla                        https://www.mozilla.org/
 Before I built a wall I'd ask to know
 What I was walling in or walling out,
 And to whom I was like to give offense.
   - Robert Frost, Mending Wall (1914)


Re: On builds getting slower

2013-08-02 Thread Kyle Huey
On Fri, Aug 2, 2013 at 8:59 PM, L. David Baron dba...@dbaron.org wrote:

 khuey was also recently working on something to reduce some pretty
 bad #include fanout related to the new DOM bindings generation.
 (I'm not sure if it's landed.)


That was bug 887553.  I'll land it on Monday.

- Kyle


Re: On builds getting slower

2013-08-02 Thread Kyle Huey
On Fri, Aug 2, 2013 at 9:12 PM, Kyle Huey m...@kylehuey.com wrote:

 On Fri, Aug 2, 2013 at 8:59 PM, L. David Baron dba...@dbaron.org wrote:

 khuey was also recently working on something to reduce some pretty
 bad #include fanout related to the new DOM bindings generation.
 (I'm not sure if it's landed.)


 That was bug 887553.  I'll land it on Monday.


Bah, I meant bug 887533.

- Kyle