Re: Analyze C++/compiler usage and code stats easily

2013-11-16 Thread Gregory Szorc
I updated the patch to be a little bit more robust. New version at 
http://hg.gregoryszorc.com/gecko-collab/rev/6af8ba812e82


I also put together some lists of compiler CPU time per directory:

https://gps.pastebin.mozilla.org/3613688
https://gps.pastebin.mozilla.org/3620053

In terms of percent:

https://gps.pastebin.mozilla.org/3620075
https://gps.pastebin.mozilla.org/3620086

On 11/14/13, 11:43 PM, Gregory Szorc wrote:

C++ developers,

Over 90% of the CPU time required to build the tree is spent compiling
or linking C/C++. So, anything we can do to make that faster will make
the overall build faster.

I put together a quick patch [1] to make it rather simple to extract
compiler resource usage and very basic code metrics during builds.

Simply apply that patch and build with `mach build --profile-compiler`
and your machine will produce all kinds of potentially interesting
measurements. They will be stuffed into objdir/.mozbuild/compilerprof/.
If you don't feel like waiting (it will take about 5x longer than a
regular build because it performs separate preprocessing, AST, and
codegen compiler invocations, 3 times each), grab an archive of an OS X
build I just performed from [2] and extract it to objdir/.mozbuild/.

I put together an extremely simple `mach compiler-analyze` command to
sift through the results, e.g.:

$ mach compiler-analyze preprocessor-relevant-lines
$ mach compiler-analyze codegen-sizes
$ mach compiler-analyze codegen-total-times

Just run `mach help compiler-analyze` to see the full list of what can
be printed. Or, write your own code to analyze the produced JSON files.

I'm sure people who love getting down and dirty with C++ will be
interested in this data. I have no doubt there are compiler time and
code size wins waiting to be discovered in it. We may even uncover a
perf issue or two. Who knows.

Here are some questions I have after casually looking at the data:

* The mean ratio of .o size to lines from the preprocessor is 16.35
bytes/line. Why do 38/4916 (0.8%) files have a ratio over 100? Why are
so many of these in js/ and gfx/?

* What's in the 150 files that have 100k+ lines after preprocessing that
makes them so large?

* Why does so much of js/'s source code gravitate towards the "bad"
extreme for most of the metrics (code size, compiler time, preprocessor
size)?

Disclaimer: This patch is currently hacked together. If there is an
interest in getting this checked into the tree, I can clean it up and do
it. Just file a bug in Core :: Build Config and I'll make it happen when
I have time. Or, if an interested party wants to champion getting it
landed, I'll happily hand it off :)

[1] http://hg.gregoryszorc.com/gecko-collab/rev/741f3074e313
[2] https://people.mozilla.org/~gszorc/compiler_profiles.tar.bz2


Re: Analyze C++/compiler usage and code stats easily

2013-11-16 Thread Terrence Cole
On 11/15/2013 05:37 PM, Gregory Szorc wrote:
> On 11/15/13, 12:26 PM, Terrence Cole wrote:
>> The problem this mess is solving, at least in the GC, is that gecko
>> needs to be able to inline barriers, UnmarkGray, and misc other stuff.
>> Whenever we accidentally out-of-line anything in these paths we see a
>> tremendous slowdown on dromaeo_dom and dromaeo_css, so we do have strong
>> evidence that this optimization is critical. We hide as many of the
>> internal GC details as possible behind magic number offsets, void*, and
>> shadow structs, but this still -- particularly combined with templates
>> -- leaves us with a tremendous amount of code in headers.
>>
>> I think our best hope for improving compilation time with these
>> limitations is C++ modules. It wouldn't really improve the code layout
>> mess, but would at least mean we only have to compile the massive pile
>> of inlined code once.
> 
> Could you compile parts of SpiderMonkey in unified mode?

It would be very useful for me, but I don't think I'm the common case
for SpiderMonkey hackers.

I actually had great results playing with clang's precompiled headers:
a 30+% speedup on compilation of jsgc.cpp. Getting it production-ready
looked like it would be a major headache though, so I didn't push
forward with it.
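
For reference, the experiment has roughly the following shape; the
prefix header's name and contents are hypothetical, but the flags are
standard clang:

  // jsgc-prefix.h (hypothetical): gather the heavy, rarely-changing
  // headers that jsgc.cpp pulls in, so they are parsed only once.
  #include "jsapi.h"
  #include "gc/Heap.h"

  // Build the PCH once:
  //   clang++ $CXXFLAGS -x c++-header jsgc-prefix.h -o jsgc-prefix.h.pch
  // Then rebuilds of jsgc.cpp reuse it instead of re-parsing the headers:
  //   clang++ $CXXFLAGS -include-pch jsgc-prefix.h.pch -c jsgc.cpp
  // (The compile flags must match between the two invocations.)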




Re: build fail @ "xulrunner-sdk/include/mozilla/Atomics.h" when using xulrunner-sdk >= v25.0, on linux/64 with gcc 4.7x/4.8x

2013-11-16 Thread opsdmt
IIUC, any app built against the new xulrunner will need to be aware of this.

A dev on a downstream app suggested:

"Sounds like something that xulrunner should provide in there .pc file. So that 
when pkg-config --cflags xulrunner is run (which configure does do). This value 
should come out."

sounds reasonable.

Checking in xulrunner-sdk/, only an 'nspr' pkg-config file is available:

find . -name '*.pc'
  ./lib/pkgconfig/nspr.pc
  ./sdk/lib/pkgconfig/nspr.pc

Nothing for xulrunner itself, i.e., no mention/suggestion of the
"-std=gnu++0x" flag.
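
For illustration, a minimal sketch of what the SDK could ship (file
name, paths, and version hypothetical):

  # sdk/lib/pkgconfig/libxul.pc -- hypothetical; no such file exists today
  prefix=/opt/xulrunner-sdk
  includedir=${prefix}/include

  Name: libxul
  Description: XULRunner SDK
  Version: 25.0
  Cflags: -I${includedir} -std=gnu++0x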

In any case, pkg-config provides CFLAGS, not CXXFLAGS ...

EVENTUALLY, this'll get dealt with in distros' gcc.

Until then / for now, can/should this be flagged from within xulrunner
code/config? Or at the end-use app's configure stage?


How to reduce the time m-i is closed?

2013-11-16 Thread smaug

Hi all,


The recent OOM cases have been really annoying. They have slowed down
development, even for those who haven't been dealing with the actual
issue(s).

Could we handle these kinds of cases differently? Perhaps clone the bad
state of m-i to some other repository we're tracking using tbpl, back
out stuff from m-i to a state where we can run it, re-open it, and do
the fixes in the clone. Then, say in a week, merge the clone back to
m-i. If the state is still bad (no one has stepped up to fix the
issues), keep m-i closed until the issues have been fixed.


thoughts?


-Olli


Re: Unified builds

2013-11-16 Thread Gabriele Svelto

On 14/11/2013 23:49, Ehsan Akhgari wrote:

The advantage of this is that it speeds up compilation (I've measured
a 6-15x speed improvement locally, depending on the code in question).


Another advantage would be that for mostly self-contained modules we
could get most of the benefits of LTO without having to resort to LTO
itself, as the compiler would have visibility over all the sources as a
single compilation unit.


In turn this would also allow us to remove a lot of inline methods that
we only have in the headers for performance reasons. As was pointed out
in the "Analyze C++/compiler usage and code stats easily" thread, this
could bring significant compilation time improvements too.
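
As a sketch of the idea (names hypothetical): a method that today must
be defined in its header so callers everywhere can inline it could move
into its .cpp file, and callers folded into the same unified file would
still inline it:

  // Foo.h (hypothetical)
  #include <cstdint>
  class Foo {
  public:
    // Today this would be defined inline right here, purely for
    // performance: void Bump() { ++mCount; }
    void Bump();
  private:
    uint32_t mCount;
  };

  // Foo.cpp
  #include "Foo.h"
  // With unified sources, callers whose .cpp files land in the same
  // Unified_cpp_*.cpp share this translation unit, so the compiler can
  // still inline Bump() without LTO.
  void Foo::Bump() { ++mCount; }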


 Gabriele


Re: Unified builds

2013-11-16 Thread Ms2ger

On 11/14/2013 11:49 PM, Ehsan Akhgari wrote:

I've started to work on a project in my spare time to switch us to use
unified builds for C/C++ compilation.  The way that unified builds work is
by using the UNIFIED_SOURCES instead of the SOURCES variable in moz.build
files.  With that, the build system creates files such as:

// Unified_cpp_path_0.cpp
#include "Source1.cpp"
#include "Source2.cpp"
// ...

And compiles them instead of the individual source files.


One issue I only thought of today: this means that Source2.cpp can use
all headers included into Source1.cpp, at least until enough files are
added between Source1 and Source2 that Source2 ends up in the next
unified file. This hasn't been much of an issue for autogenerated C++,
but it seems like it could be harder to get right with hand-written C++.
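
A contrived sketch of the hazard (file contents hypothetical):

  // Source1.cpp
  #include <vector>
  std::vector<int> gValues;

  // Source2.cpp -- note: no #include <vector> of its own
  extern std::vector<int> gValues;
  void AddValue(int aValue) { gValues.push_back(aValue); }

Unified_cpp_path_0.cpp textually includes Source1.cpp and then
Source2.cpp, so Source2.cpp happens to compile. As soon as a newly added
file pushes Source2.cpp into Unified_cpp_path_1.cpp, the missing
include becomes a build error.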


One way around it would be to not unify sources in automation. On one
hand, this could cause more bustage when changes that built locally turn
out not to have all the includes they need; on the other, it might be
better than having to fix up a dozen unrelated files whenever you add a
new one.


Thoughts?
Ms2ger
