On Tue, Jun 18, 2024, at 12:02 AM, Jacob Bachmeyer wrote:
> Zack Weinberg wrote:
>> On Mon, Jun 17, 2024, at 10:30 PM, Jacob Bachmeyer wrote:
>>
>> I regret to say, yes, there are. For example, this can happen with
>> NFS if there are multiple clients updating the same files and they
>> don't all agree on the current time. Think build farm with several
>> different configurations being built out of the same srcdir -
>> separate build dirs, of course, but that doesn't actually help here
>> since the issue is ensuring the Makefile doesn't think *configure*
>> (not config.status) needs rebuilding.
>
> Wait... all of configure's non-system dependencies are in the release
> tarball and presumably (if "make dist" worked correctly) backdated
> older than configure when the tarball is unpacked.

In my experience, tarballs cannot be trusted to get this right, *and*
tar implementations cannot be trusted to unpack them accurately (e.g.,
despite POSIX, I have run into implementations that defaulted to the
equivalent of GNU tar's --touch mode, stamping every extracted file
with the extraction time rather than the recorded mtime).  Subsequent
bounces through downstream repackaging do not help.  Literally as I
type this, I am watching gettext 0.22 run its ridiculous number of
configure scripts a second time from inside `make`.
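
One possible workaround (sketch only; the file list is illustrative
and package-specific) is to re-stamp the generated files in dependency
order after unpacking, so each tier postdates its inputs:

    # rough sketch: re-stamp generated files tier by tier after
    # unpacking; which files exist depends on the package
    tar xf foo-1.0.tar.gz && cd foo-1.0
    touch aclocal.m4
    touch configure config.h.in
    find . -name Makefile.in -exec touch {} +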

> Does "make dist" need to touch configure to ensure that it is newer
> than its dependencies before rolling the tarball?

It ought to, but I don't think that would be more than a marginal
improvement.  Touching the top-level configure won't be enough; you'd
need to do a full topological sort of the dependency graph leading
into every configure, every Makefile.in, and every other
generated-but-shipped file, and make sure that each tier of generated
files is newer than its inputs.
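
For concreteness, a hypothetical dist-hook doing that tier-by-tier
touching might look something like this (file names illustrative,
recipe lines need hard tabs, and Automake does no such ordering for
you):

    # hypothetical dist-hook: touch generated files tier by tier
    # inside the about-to-be-packed dist tree
    dist-hook:
            touch $(distdir)/aclocal.m4
            touch $(distdir)/configure $(distdir)/config.h.in
            find $(distdir) -name Makefile.in -exec touch {} +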

I wonder if a more effective approach would be to disable the rules
that regenerate configure, Makefile.in, etc. unless either
--enable-maintainer-mode was given or we detect that we are building
out of a VCS checkout.
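
Roughly this, as an untested configure.ac sketch (it bypasses
AM_MAINTAINER_MODE, which only accepts a fixed default, and the VCS
test here is just a .git check):

    dnl untested sketch: enable the rebuild rules only on explicit
    dnl request or when building from a VCS checkout
    AC_ARG_ENABLE([maintainer-mode],
      [AS_HELP_STRING([--enable-maintainer-mode],
         [enable rules to rebuild configure, Makefile.in, etc.])],
      [], [enable_maintainer_mode=auto])
    AS_IF([test "$enable_maintainer_mode" = auto],
      [AS_IF([test -e "$srcdir/.git"],
         [enable_maintainer_mode=yes],
         [enable_maintainer_mode=no])])
    AM_CONDITIONAL([MAINTAINER_MODE],
      [test "$enable_maintainer_mode" = yes])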

zw
