I came across this thread in the list archives and so apologize for the lack of thread id headers.
AT&T maintains a branch of nmake that split from Lucent in '95. There is a short overview at http://www.research.att.com/~gsf/nmake/ Two parts of the thread caught my eye: state and dependency scanning.

Maintaining state opens up many opportunities for reducing the number of tools and files required to support project builds. The state is a repository not only of what was built but also of what was used to build it. It is also a convenient place to stash intermediate information like include dependencies. State naturally leads to automatically maintained common actions like clean, clobber, tgz. State can also store the commands and options used to generate each target; changes to these can be used in future out-of-date tests.

The choice of letting the compiler scan for dependencies vs. scanning a priori in make depends largely on how much control you have over the compiler. A priori scanning allows make to discover generated dependencies that otherwise would not turn up until the compiler failed; in the worst case it might take n failed compiles to discover all generated dependencies for a single source file. A priori scanning can also be done incrementally: only files changed since the last scan need to be rescanned. There are ways to defeat a priori scanners (macro expansion in #include), but this hasn't been a problem in practice.

Assuming (gnu) makefiles must be specially constructed to handle compiler-based scans, what happens when CC=one-that-does-not-scan?

Glenn Fowler
AT&T Labs Research, Florham Park NJ

_______________________________________________
Bug-make mailing list
[EMAIL PROTECTED]
http://mail.gnu.org/mailman/listinfo/bug-make
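[Editor's note: to make the state idea concrete, here is a minimal sketch, in Python rather than nmake's own state-file format, of recording the command used to build each target so that a change in options marks the target out of date. All names (load_state, record, the JSON layout) are illustrative assumptions, not nmake's actual mechanism.]

```python
import hashlib
import json
import os


def load_state(state_path):
    """Load the build state (target -> command hash), or start empty."""
    if os.path.exists(state_path):
        with open(state_path) as f:
            return json.load(f)
    return {}


def out_of_date(state, target, command):
    """A target is out of date if it is missing, or if the command
    (including its options) that produced it has changed since the
    last recorded build."""
    if not os.path.exists(target):
        return True
    cmd_hash = hashlib.sha256(command.encode()).hexdigest()
    return state.get(target) != cmd_hash


def record(state, target, command, state_path):
    """Record the command hash for a freshly built target and persist."""
    state[target] = hashlib.sha256(command.encode()).hexdigest()
    with open(state_path, "w") as f:
        json.dump(state, f)
```

With this, rebuilding with `cc -O0` after a `cc -O2` build flags the object file as out of date even though no source timestamp changed, which is exactly the kind of test plain timestamp-based make cannot express.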
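[Editor's note: a hedged sketch of the incremental a-priori scan described above, again in Python for brevity. It rescans a source file only when its mtime is newer than the cached scan, tracks only quoted includes, and, as the post notes, a textual scanner like this is defeated by macro expansion in #include. The cache layout is an assumption for illustration.]

```python
import os
import re

# Matches only the literal form: #include "header.h"
# A macro include such `#include CONFIG_H` is invisible to this regex,
# which is the defeat case mentioned above.
INCLUDE_RE = re.compile(r'^\s*#\s*include\s*"([^"]+)"', re.MULTILINE)


def scan_includes(source, cache):
    """Return the quoted #include dependencies of `source`.

    cache maps path -> (mtime, deps); the file is re-read only when its
    mtime is newer than the cached scan, so an unchanged tree costs one
    stat per file rather than one full read per file."""
    mtime = os.path.getmtime(source)
    cached = cache.get(source)
    if cached is not None and cached[0] >= mtime:
        return cached[1]
    with open(source) as f:
        deps = INCLUDE_RE.findall(f.read())
    cache[source] = (mtime, deps)
    return deps
```

Persisting the cache alongside the rest of the build state gives the incremental behavior described in the post: after the first full scan, only files touched since the last run are re-read.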