Michael Pogue wrote:
Do you think it's possible to change the build, such that more can go on in parallel? Or, are there some limiting steps that are hard to eliminate (if so, where do you think they are?)?

Our build system does the dependency analysis depth-first. Say I want to do an incremental build from usr/src/uts; the build has to traverse every subdirectory and examine all of the dependencies, and that alone takes a long time. Even when everything does need to be built (which isn't often), the leaf nodes have to be built first, so any subdirectory with only a few files ends up serializing the build when we descend into it. Another side effect I've noticed is that when big binaries are linked, things serialize, because those links are high-level leaf makefile targets. So while one link process churns away for 30-60 seconds (watch genunix link to see what I mean), everything else has to wait.

No doubt many others have done more specific analysis and can share their thoughts also.

A more parallel build solution would be to build the dependency graph once up front, then schedule jobs onto CPUs top-down, running independent parts of the build fully in parallel. That would keep plenty of tasks in flight and avoid the serialization, since the directory traversal would be driven by the dependency graph itself rather than by recursive make descending one subtree at a time. For instance, there is no reason I can't build several libraries in /usr/lib while my genunix is linking.
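The scheduling idea can be sketched as a toy model (a rough Python sketch, not real make or dmake code; the target names, the graph, and the worker count are all made up for illustration): build the whole dependency graph once, then run any target whose dependencies have all finished, so independent subtrees overlap instead of serializing.

```python
# Toy sketch of top-down scheduling over a prebuilt dependency graph.
# A target becomes runnable the moment all of its dependencies finish,
# so independent work (e.g. libraries vs. a big kernel link) overlaps.
from concurrent.futures import ThreadPoolExecutor
import threading

# Hypothetical graph: target -> list of targets it depends on.
deps = {
    "genunix": ["krtld", "genassym"],
    "krtld": [],
    "genassym": [],
    "libc": [],
    "libnsl": ["libc"],
}

# Invert the graph: who is waiting on each target?
dependents = {t: [] for t in deps}
for target, needs in deps.items():
    for d in needs:
        dependents[d].append(target)

remaining = {t: len(n) for t, n in deps.items()}  # unmet-dependency counts
lock = threading.Lock()
done = threading.Event()
finished = []

def build(target, pool):
    # Real compile/link work would happen here.
    with lock:
        finished.append(target)
        ready = []
        for t in dependents[target]:
            remaining[t] -= 1
            if remaining[t] == 0:
                ready.append(t)      # all deps satisfied; schedule it
        if len(finished) == len(deps):
            done.set()
    for t in ready:
        pool.submit(build, t, pool)

with ThreadPoolExecutor(max_workers=4) as pool:
    # Seed the pool with every target that has no dependencies.
    for t, n in remaining.items():
        if n == 0:
            pool.submit(build, t, pool)
    done.wait()  # all targets built before the pool shuts down

print(finished)  # some order in which every target follows its deps
```

In this model the libraries and the genunix link sit in unrelated parts of the graph, so the scheduler is free to run them at the same time; the only waiting left is what the dependency edges genuinely require.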

Such a piece of work is not trivial. My hat's off to anyone brave enough to try to implement such a new build system. But it would make us ALL a lot more productive. The most recent numbers I saw estimated that nightly on SPARC is about 50% serial when run on a good-sized machine (~8 CPUs and 16GB of RAM).

- Eric
_______________________________________________
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org
