On Sat, Mar 30, 2024 at 08:44:36AM -0700, Russ Allbery wrote:
> Luca Boccassi <bl...@debian.org> writes:
> 
> > In the end, massaged tarballs were needed to avoid rerunning autoconfery
> on twelve thousand different proprietary and non-proprietary Unix
> > variants, back in the day. In 2024, we do dh_autoreconf by default so
> > it's all moot anyway.
> 
> This is true from Debian's perspective.  This is much less obviously true
> from upstream's perspective, and there are some advantages to aligning
> with upstream about what constitutes the release artifact.

My upstream perspective is that I've been burned repeatedly by
incompatible version changes in the autotools programs, which cause
my configure.{in,ac} file to no longer create a working configure
script, or which cause subtle breakages.  So my practice is to run
autoconf on my Debian testing development system before checking in
the configure.ac and configure files; I ship the generated files, and
I don't tell people to run autoreconf before running ./configure.
And if things break after they run autoreconf, I tell them, "you ran
autoreconf; you get to keep both pieces".
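
In other words, the release-time workflow is roughly this (a sketch,
not the literal script; the exact test target varies):

# on a Debian testing box, before tagging a release:
autoconf                    # regenerate configure from configure.ac
./configure && make check   # verify the generated script still works
git add configure.ac configure
git commit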

And there *have* been times when autoconf has gotten updated in Debian
testing, and the resulting configure script has broken, at which point
I curse at autotools, and fix the configure.ac and/or aclocal.m4
files, etc., and *then* check in the generated configure file and
autotool source files.

> Yes, perhaps it's time to switch to a different build system, although one
> of the reasons I've personally been putting this off is that I do a lot of
> feature probing for library APIs that have changed over time, and I'm not
> sure how one does that in the non-Autoconf build systems.  Meson's Porting
> from Autotools [1] page, for example, doesn't seem to address this use
> case at all.

The other problem is that many of the other build systems are much
slower than autoconf/make.  (Note: I don't use libtool, because it's
so d*mn slow.)  Or building with the alternate system might require a
major bootstrapping phase, or downloading a JVM, etc.
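
(For what it's worth, Meson's compiler object can express at least
the simple cases of the API probing Russ mentions; a minimal sketch,
with the function and header names purely hypothetical:

# meson.build: roughly the moral equivalent of an AC_CHECK_FUNCS probe
cc = meson.get_compiler('c')
if cc.has_function('some_new_api', prefix : '#include <somelib.h>')
  add_project_arguments('-DHAVE_SOME_NEW_API', language : 'c')
endif

Whether that scales to the hairier historical checks is a separate
question.)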

> Maybe the answer is "you should give up on portability to older systems as
> the cost of having a cleaner build system," and that's not an entirely
> unreasonable thing to say, but that's going to be a hard sell for a lot of
> upstreams that care immensely about this.

Yeah, that too.  There are still people building e2fsprogs on AIX,
Solaris, and other legacy Unix systems, and I'd hate to break them,
or to inflict a lot of pain on people who are building on MacPorts,
et al.  It wasn't *all* that long ago that I started requiring C99
compilers....

That being said, if someone is worried about a Jia Tan-style attack
on e2fsprogs: first of all, you can verify that configure matches
what the autoconf in Debian testing produced at the time the archive
was generated, and the officially released tar file is generated via:

git archive --prefix=e2fsprogs-${ver}/ ${commit} | gzip -9n > $fn
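
... so anyone can regenerate the tarball bit-for-bit from the tag and
compare checksums, e.g. (version number hypothetical):

git archive --prefix=e2fsprogs-1.47.0/ v1.47.0 | gzip -9n | sha256sum
sha256sum e2fsprogs-1.47.0.tar.gz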

The release tarballs are also stored in the pristine-tar branch of
e2fsprogs.  So even if the kernel.org (preferred) and sourceforge.net
(legacy) servers for the e2fsprogs tar files completely implode, and
you only have access to the git repo, you can still get the original
e2fsprogs tar files using pristine-tar.
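
Roughly (tarball name hypothetical, and assuming the pristine-tar
branch has been fetched):

git clone https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git
cd e2fsprogs
git branch pristine-tar origin/pristine-tar
pristine-tar checkout e2fsprogs-1.47.0.tar.gz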

                                                - Ted
