* NightStrike wrote on Fri, Jan 14, 2011 at 05:12:46AM CET:
> On 1/13/11, Ralf Wildenhues wrote:
> > make is a bit flawed for real large projects because it always walks
> > the whole dependency graph, unlike beta build systems who use a notify
> > daemon and a database to only walk subgraphs known to be outdated.
>
> How big is real large?  GCC uses make, for instance, and it's the
> biggest public project that I personally know about.
Running 'make' in an up to date GCC build tree with a hot file cache
takes on the order of 10s for me.  That may be tolerable, but only
because any changes causing recompiles lead to much longer build times,
and because the GCC build system provides lots of special-case make
targets to rebuild only parts of the tree (e.g., stage3-bubble,
all-target-libgfortran), which finish more quickly.

The GCC build system is fairly special though, in that it is pretty
complex and (ab?)uses recursive makefiles also to allow ignoring big
chunks of the dependency tree in some cases.  (That, by the way, is
what nobody ever tells you when they point you to the "Recursive Make
Considered Harmful" paper: that walking the full dependency tree is
more expensive than walking a factorized one.  Sure, you then need to
keep the toplevel deps up to date manually; a small sketch is at the
end of this mail.)

> At what magnitude does make break down, do you think?  And how/where
> does it become flawed?

A rough estimate is 100k files, even after quadratic scaling issues in
make are fixed.  The tup project (which is a prototype beta build
system) claims even lower numbers [1], but their graphs indicate that
they have also measured some nonlinear effects in make, which should
all be fixable.  I'd guess that webkit would already benefit slightly,
though.

> In retrospect, even when dealing with GCC, I never do a partial
> rebuild.

For full rebuilds, this particular issue is irrelevant.  There, the
whole dependency tree needs to be walked no matter which build system
is used.

Cheers,
Ralf

[1] http://gittup.org/tup/make_vs_tup.html
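
P.S.: To make the factorized-vs-full-tree point a bit more concrete,
here is a minimal toplevel-makefile sketch of the recursive style.
The directories lib/ and app/ are invented for illustration (they are
not taken from GCC), and recipe lines need a literal tab as usual:

  SUBDIRS = lib app

  all: $(SUBDIRS)

  # Each subdirectory is handled by a sub-make, which only ever loads
  # and walks that directory's (much smaller) dependency graph.
  $(SUBDIRS):
          $(MAKE) -C $@

  # The cross-directory dependency has to be maintained by hand; forget
  # to update this line and app happily links against a stale lib.
  app: lib

  .PHONY: all $(SUBDIRS)

A non-recursive makefile that includes fragments from lib/ and app/
would track that edge automatically, but in exchange every invocation
of make stats and walks the whole tree again.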