On Fri, Mar 17, 2017, at 02:20 PM, Ben Kelly wrote:
> On Fri, Mar 17, 2017 at 1:36 PM, Ted Mielczarek <t...@mielczarek.org>
> wrote:
> 
> > Back to the original topic, I recently set up a fresh Windows machine
> > and I followed the same basic steps (enable performance power mode,
> > whitelist a bunch of stuff in Windows Defender) and my build seemed
> > basically CPU-bound[1] during the compile tier. Disabling realtime
> > protection in Defender made it *slightly* better[2] but didn't have a
> > large impact on the overall build time (something like 20s out of ~14m
> > total for a clobber).
> >
> 
> The 14min measurement must have been for a partial build.  With
> Defender disabled, the best I can get is 18min.  This is on one of the
> new Lenovo P710 machines with 16 Xeon cores.

Nope, full clobber builds: `./mach clobber; time ./mach build`. (I have
the same machine, FWIW.) The SVG files I uploaded came from `mach
resource-usage`, which produces nice output but doesn't offer a good way
to share the resulting data externally. I didn't save the actual output
of `time` anywhere, but going back through my IRC logs, the first build
I did on the machine took 15:08.01, the second (where all the source
files ought to be in the filesystem cache) took 14:58.24, and another
build I did with Defender's real-time protection disabled took 14:27.73.
We should figure out what the difference is between our system
configurations; 3-3.5 minutes is a good chunk of time to be leaving on
the table! Similarly, someone (I can't remember who) told me they could
do a Linux Firefox build in ~8(?) minutes on the same hardware. (I will
try to track down the source of that number.) That gives us a fair
lower bound to shoot for, I think.
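
For reference, this is roughly what I've been running to get those
numbers (it assumes a mozilla-build shell sitting in a mozilla-central
checkout with a working mozconfig):

    # full clobber build, timed
    ./mach clobber
    time ./mach build

    # per-tier breakdown of the build that just finished
    # (opens the HTML report)
    ./mach resource-usage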

> I definitely observed periods where it was not CPU bound.  For
> example, at the end of the js lib build I observed a single cl.exe
> process sit for ~2 minutes while no other work was being done.  I also
> saw link.exe take a long time without parallelism, but I think that's
> a known issue.

Yeah, I specifically meant "CPU-bound during the compile tier", where we
compile all the C++ code. If you look at the resource usage graphs I
posted, it's pretty apparent where that is (the full `mach
resource-usage` HTML page has a nicer breakdown of tiers). The stuff
before and after compile is not as good, and the tail end of compile
gets hung up on some long-pole files, but otherwise it does a pretty
good job of saturating the available CPU. I also manually monitored disk
and memory usage during the second build and didn't see anything that
looked like a bottleneck there. Disk usage showed ~5% active time,
presumably mostly the compiler generating output, and memory usage
seemed to be stable at around 9GB for most of the build (I didn't watch
during libxul linking; I wouldn't be surprised if it spikes then).
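
(I was just eyeballing that manually; if someone wants to log the same
counters across a whole build, something like the sketch below should
work. It assumes Windows' typeperf is on the PATH and English-locale
counter names, and the quoting may need tweaking depending on which
shell you run it from.)

    # Sample CPU, disk busy time, and committed memory every 5 seconds
    # while a build runs in another window; Ctrl+C to stop.
    # Writes a CSV you can graph afterwards.
    typeperf '\Processor(_Total)\% Processor Time' \
             '\PhysicalDisk(_Total)\% Disk Time' \
             '\Memory\Committed Bytes' \
             -si 5 -o build-perf.csv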

-Ted
