On 8/22/12 2:02 AM, Neil wrote:
> Gregory Szorc wrote:
>> Up until now, the focus has been on making Makefile.in's themselves
>> generic and data-driven. We would use pymake's API to parse, load, and
>> extract data from Makefile.in's to construct the build definition. In
>> the long run, we'd realize that using make files for data definition
>> was silly (and a large foot gun) and thus we would switch to something
>> else.
>
> Switch to something else for all Makefile.in's or just the generic ones?


The plan would be to have no Makefile.in's at all: the presence of a Makefile implies the use of make, which may not always be the case. We want to support building with alternate build backends, and that is difficult if large parts of the build involve one-off usages of make. We may still use make for some specific tasks (it is a decent dependency-action DSL). But, for building the core of mozilla-central, the use of Makefiles as the *definition* of the build config will likely be marginalized.
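To make that concrete, here's a rough sketch in Python (all names made up; this is not the actual design) of what "build definition as data, consumed by interchangeable backends" could look like:

from dataclasses import dataclass

@dataclass
class Library:
    name: str
    sources: list

def emit_make(lib):
    # One backend: render the data as a Makefile fragment.
    objs = " ".join(s.replace(".cpp", ".o") for s in lib.sources)
    return "lib%s.so: %s\n\t$(CXX) -shared -o $@ $^\n" % (lib.name, objs)

def emit_ninja(lib):
    # Another backend: render the *same* data as a ninja fragment.
    objs = " ".join(s.replace(".cpp", ".o") for s in lib.sources)
    return "build lib%s.so: link %s\n" % (lib.name, objs)

lib = Library(name="xul", sources=["nsFoo.cpp", "nsBar.cpp"])
print(emit_make(lib))
print(emit_ninja(lib))

Any make usage that remains would then be generated output, not hand-maintained input.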

On 8/22/12 6:10 AM, Robert Kaiser wrote:
> Gregory Szorc wrote:
>> We could go the route of GYP and shoehorn conditionals into a static
>> document (JSON) [3].
>
> JSON is a good format for data for the most part, but IMHO we *really*
> want comments in those files, and unfortunately JSON doesn't have those
> and therefore probably must be thrown out of the equation. :(
>
> Actually, I know a format that allows everything we need: Makefile! :p
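For context, the GYP approach mentioned above encodes conditionals as strings inside otherwise static, JSON-like data; a rough sketch (GYP files are Python/JSON-style literals):

target = {
    'target_name': 'example',
    'type': 'shared_library',
    'sources': ['common.cpp'],
    'conditions': [
        ['OS=="linux"', {'sources': ['linux_impl.cpp']}],
        ['OS=="win"', {'sources': ['win_impl.cpp']}],
    ],
}

The document is data, but those condition strings smuggle an expression language back in, which is part of what makes it feel like a shoehorn.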

I actually agree with you! This was our initial plan: load Makefiles into pymake and evaluate the values of specific variables, like CPPSRCS. Unfortunately, there are some deal-breakers. The evaluation model of make is... complicated: lazy evaluation, rules that must be executed before variables can be read, side effects from evaluating variables, etc. There are a *lot* of pitfalls.
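To illustrate with a contrived sketch (not our actual code): a tool that tries to read CPPSRCS statically, without honoring make's evaluation model, falls over as soon as a Makefile.in does anything non-trivial:

import re

# Naively extract CPPSRCS assignments without evaluating make semantics.
ASSIGN = re.compile(r'^CPPSRCS\s*[:+?]?=\s*(.*)$')

def extract_cppsrcs(text):
    sources = []
    for line in text.splitlines():
        m = ASSIGN.match(line)
        if m:
            sources.extend(m.group(1).split())
    return sources

# Plain data works fine:
print(extract_cppsrcs("CPPSRCS = foo.cpp bar.cpp"))
# -> ['foo.cpp', 'bar.cpp']

# But real Makefile.in's have conditionals and lazy evaluation:
tricky = """
ifdef MOZ_FEATURE
CPPSRCS += feature.cpp
endif
CPPSRCS += $(shell ls gen/*.cpp)
"""
print(extract_cppsrcs(tricky))
# -> ['feature.cpp', '$(shell', 'ls', 'gen/*.cpp)']
# feature.cpp is included whether or not MOZ_FEATURE is set, and the
# $(shell ...) value needs actual (side-effecting) evaluation to expand.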

Switching to something else also has another advantage: a clean slate. I'm hoping we'll use the opportunity to scrape away 10+ years of cruft.

> Seriously, this is hard, and needing to parse even more files to build
> doesn't sound faster and cleaner, but rather slower and more complex,
> esp. given that basic file I/O is often costly (from watching my CPU
> usage, a lot of the build time is spent in I/O wait when using spinning
> disks - SSDs improve that hugely).

My browser build places 19,000+ files in the objdir. That's not counting all the files in the srcdir that need to be stat()'d or read during builds. Against that baseline, I think the overhead of a few hundred build manifest files won't be too bad. It will also likely be mitigated through caching.
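As a sketch of the kind of caching I have in mind (hypothetical, and assuming JSON manifests purely for illustration): parse each manifest once, then reuse the result as long as the file's mtime and size are unchanged, so a no-op rebuild costs one stat() per manifest instead of a read and a parse:

import json
import os
import pickle

CACHE_PATH = ".manifest_cache.pickle"

def load_manifests(paths):
    # Load the cache from the previous run, if any.
    try:
        with open(CACHE_PATH, "rb") as f:
            cache = pickle.load(f)
    except (OSError, pickle.PickleError):
        cache = {}

    manifests = {}
    for path in paths:
        st = os.stat(path)
        key = (st.st_mtime_ns, st.st_size)
        cached = cache.get(path)
        if cached is not None and cached[0] == key:
            manifests[path] = cached[1]  # hit: one stat(), no read/parse
        else:
            with open(path) as f:
                manifests[path] = json.load(f)  # miss: read and parse
            cache[path] = (key, manifests[path])

    with open(CACHE_PATH, "wb") as f:
        pickle.dump(cache, f)
    return manifests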

Yes, I/O wait can significantly reduce build times. I recommended at [1] that, aside from a modern CPU, investing in an SSD is the best thing you can do for build times. If a build machine doesn't have an SSD, it's effectively throwing away CPU cycles in I/O wait. It's true that the build system today wastes a lot of CPU cycles through inefficient use of the available cores (recursive make prevents wide parallelism). But that efficiency should rise drastically with our build system improvements, which will make the impact of an SSD even more pronounced. On my MBP (with 8 GB of RAM), building with an SSD reduced overall build time by a few minutes, with libxul linking going from ~55s to ~5s. That's without any build system changes! There is no doubt in my mind that SSDs are worth the investment.

[1] http://gregoryszorc.com/blog/2012/07/29/mozilla-central-build-times/