> So, I think you are saying that, for pieces in a mkfile that take less than
> 1s to build, it is possible for them to be built again, unnecessarily, when
> mk is run again.  This is normal and just the way it is.  Is that correct?

Correct except for "just the way it is".  There is a principle
involved which is so pervasive in Plan 9 that we often forget to make
it explicit.  To quote Ken Thompson: "Throughout, simplicity has been
substituted for efficiency.  Complex algorithms are used only if their
complexity can be localized."  He was writing in 1978 about UNIX, but
Plan 9 follows firmly in this tradition.  (Linux not so much.)

Using the existing file time stamps costs some efficiency when
targets are built more often than necessary.  The question is, how
significant is this cost compared to the complexity of adding higher
time-stamp resolution?  Note that it's not necessary to run mk repeatedly
until it converges -- the algorithm is conservative in the sense that
it will not build less than required.
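To make the trade-off concrete, here is a minimal sketch of a
conservative out-of-date test over one-second time stamps.  This is
not mk's actual source; the function name outofdate and the details
of the test are my own, for illustration only:

    #include <stdio.h>
    #include <sys/stat.h>

    /* Return 1 if target must be (re)built: it is missing, or its
     * modification time is not strictly newer than the prerequisite's.
     * Treating equal times as out of date is the conservative choice:
     * with 1-second resolution, a target built in the same second as
     * its prerequisite may be rebuilt needlessly, but a stale target
     * is never skipped. */
    int
    outofdate(char *target, char *prereq)
    {
    	struct stat t, p;

    	if(stat(target, &t) < 0)
    		return 1;	/* no target yet: must build */
    	if(stat(prereq, &p) < 0)
    		return 1;	/* conservative if prereq can't be examined */
    	return t.st_mtime <= p.st_mtime;
    }

    int
    main(int argc, char *argv[])
    {
    	if(argc != 3){
    		fprintf(stderr, "usage: outofdate target prereq\n");
    		return 2;
    	}
    	printf("%s\n", outofdate(argv[1], argv[2]) ? "build" : "up to date");
    	return 0;
    }

A target built in the same second as its prerequisite compares as not
strictly newer, so the next run rebuilds it.  That occasional extra
build is the whole cost of the one-second resolution; missing a
needed build is never a possibility.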

So, how many seconds is the unnecessary building of targets actually
costing?

