Re: autoreconf --force seemingly does not forcibly update everything

2024-04-09 Thread Nick Bowler
On 2024-04-09 18:06, Sam James wrote:
> Nick poses that a specific combination of tools is what is tested and
> anything else invalidates it. But how does this work when building on
> a system that was never tested on, or with different flags, or a
> different toolchain?
>
> It's reasonable to say "look, if you do this, please both state it
> clearly and also do some investigation first to see if you can
> reproduce it with my macros", but I don't think it's a crime for
> someone to attempt it either.

To be clear, I don't mean to suggest that modifying a package by
replacing m4 sources with different versions and/or regenerating
configure with a different version of Autoconf is something that
should never be done by downstream distributors.  If doing this
solves some particular problem, then by all means do it; that's
an important part of what free software is all about.

What I have a problem with is the suggestion that distributors should
systematically throw away actually-tested configure scripts by just
discarding any m4 source files that appear to be copied from another
project (Gnulib, in this case), copying in new ones from a possibly
different version of that project, regenerating the configure script
using a possibly different version of Autoconf, and then expecting
that this process will produce high-quality results.

Cheers,
  Nick



[sr #111048] Add a syntax check to code snippets

2024-04-09 Thread Martin Nilsson
Follow-up Comment #5, sr #111048 (group autoconf):

[comment #4:]
> Note that adding a syntax check before each check will certainly make
> "configure" somewhat slower.  Should we prefer increased security at
> the cost of a slower configure?

You only need to run the syntax checker if the test fails to compile.
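
For concreteness, a minimal sketch of that idea in shell (this is not
actual Autoconf code; GCC/Clang's -fsyntax-only option stands in for
whatever checker would eventually be chosen):

    # Run the real check (compile and link) first; only on failure
    # spend time deciding whether the snippet itself was malformed.
    if ! $CC $CFLAGS $LDFLAGS conftest.c -o conftest >/dev/null 2>&1; then
      # A front-end-only pass succeeds when the test program is
      # well-formed but a library/function is missing, and fails when
      # the snippet itself is broken.
      if ! $CC $CFLAGS -fsyntax-only conftest.c >/dev/null 2>&1; then
        echo "warning: test program conftest.c is malformed" >&2
      fi
    fi

The common (successful) path stays exactly as fast as before; the
checker only runs on the failure path.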






Re: autoreconf --force seemingly does not forcibly update everything

2024-04-09 Thread Bruno Haible
Sam James replied to Bruno Haible, who cited Nick Bowler:
> >> If I distribute a release package, what I have tested is exactly what is
> >> in that package.  If you start replacing different versions of m4 macros,
> >> or use some distribution-patched autoconf/automake/libtool or whatever,
> >> then this you have invalidated any and all release testing.
> >
> > +1
> >
> > Last month, I spent 2 days on prerelease testing of coreutils. If, after
> > downloading the carefully prepared tarball from ftp.gnu.org, the first
> > thing a distro does is to throw away the *.m4 files and regenerate the
> > configure script with their own one,
> >   * It shows no respect for the QA work that the upstream developers have
> > put in.
> ...
> Nick poses that a specific combination of tools is what is tested and
> anything else invalidates it.

Correct. When an upstream developer/tester has tested a tarball in N
situations, then that tarball is what he can guarantee works. The
more changes a distro applies, the more it does so at its own risk.

> But how does this work when building on a
> system that was never tested on, or with different flags, or a different
> toolchain?

The interface of GNU 'configure' [1][2] accommodates these cases.
In most of them, there is no need to rebuild 'configure'. Just set
CC, CFLAGS, LDFLAGS etc. (a combined sketch follows the list):
  - For other systems: see [3].
  - System that is still in development: Replace config.guess, config.sub,
and potentially config.rpath with modified variants.
  - Different flags: That's what CFLAGS, CXXFLAGS are for.
  - Different compiler: e.g. clang on Ubuntu 22.04:
      CC="/inst-clang/17.0.4/bin/clang -Wl,-rpath,/inst-clang/17.0.4/lib"
      CXX="/inst-clang/17.0.4/bin/clang++ -I/usr/include/c++/11
           -I/usr/include/x86_64-linux-gnu/c++/11
           -L/usr/lib/gcc/x86_64-linux-gnu/11
           -Wl,-rpath,/inst-clang/17.0.4/lib"
  - Different linker: e.g. to link with 'mold' instead of 'ld',
    create a directory /alternative-ld/with-mold that contains a symlink
        ld -> /.../bin/mold
    and use CC="gcc -B/alternative-ld/with-mold".

On exotic systems with a non-ELF binary format, modifying libtool.m4 is
needed. But most distros are not in that situation: they use the glibc or
musl dynamic loader, hence they don't need libtool.m4 changes.

> It also has value in the context of software which is no longer
> maintained but needs to work on newer systems.

Granted; that's a different category of situation. Here a distro will
probably need to change not only *.m4 files but also *.c files, and
hopefully submit the changes upstream.

> We don't apply this rule to anything else -- you've never rejected a
> report from me because I have a newer version of a library installed
> like openssl or similar. Why is this different?

As an upstream maintainer, I have chosen (or, well, GNU has chosen for me,
even before I was present) to give *tarballs* to my users, not git
repositories. The main differences are that

  - tarballs contain some generated files, sparing the user the need
    to install the special tools that produce them and to get familiar
    with those tools (code generators, doc formatters [texlive, doxygen,
    ...], etc.)

  - tarballs contain source code from other packages (git submodule,
parts of gnulib, etc.)

  - tarballs contain localizations, which are maintained outside the
    git repository (e.g. on translationproject.org or in Weblate
    instances).

Experience has shown that this interface (tarballs with configure
script) allows for relatively effective support.

This interface also obsoletes an entire set of questions from users of
a git repository, ranging from "can you please commit the formatted
documentation into git?" through "how do I pull in the submodules?" to
"why do I get this error from flex?".

Also, too often people have reported problems with older versions of
the tools. I mean, we are at Automake 1.16.5; if someone wants to
rebuild my package with Automake 1.13.4 because that's what their
distro is carrying, and they encounter problems, it is just a waste
of the upstream developer's time. Old bugs in old versions of the tools
have long been fixed. As an upstream maintainer, I don't want to support
  - different versions of Automake,
  - different versions of Bison,
  - different versions of texinfo,
  - different versions of groff,
  - etc.
I have enough work supporting
  - different versions of the OS (glibc, Cygwin, etc.),
  - different versions of GNU make,
  - different versions of gcc and clang,
  - different versions of packages with optional support (--with-* options),
  - ...
Keep the test matrix small!

Bruno

[1] https://www.gnu.org/prep/standards/html_node/Configuration.html
[2] https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.72/html_node/Preset-Output-Variables.html
[3] https://gitlab.com/ghwiki/gnow-how/-/wikis/Platforms/Configuration






Re: autoreconf --force seemingly does not forcibly update everything

2024-04-09 Thread Sam James
Bruno Haible writes:

> Nick Bowler wrote:
>> If I distribute a release package, what I have tested is exactly what is
>> in that package.  If you start replacing different versions of m4 macros,
>> or use some distribution-patched autoconf/automake/libtool or whatever,
>> then this you have invalidated any and all release testing.
>
> +1
>
> Last month, I spent 2 days on prerelease testing of coreutils. If, after
> downloading the carefully prepared tarball from ftp.gnu.org, the first
> thing a distro does is to throw away the *.m4 files and regenerate the
> configure script with their own one,
>   * It shows no respect for the QA work that the upstream developers have
> put in.

To me, this reads as taking it a bit personally.

Nick poses that a specific combination of tools is what is tested and
anything else invalidates it. But how does this work when building on a
system that was never tested on, or with different flags, or a different
toolchain?

It's reasonable to say "look, if you do this, please both state it
clearly and also do some investigation first to see if you can reproduce
it with my macros", but I don't think it's a crime for someone to
attempt it either.

It also has value in the context of software which is no longer
maintained but needs to work on newer systems.

We don't apply this rule to anything else -- you've never rejected a
report from me because I have a newer version of a library installed
like openssl or similar. Why is this different?

>   * It increases the number of bug reports that reach upstream and yet
> are caused by the distro.
>   * In the long run, the upstream maintainers will be less willing to
> handle bug reports from users of this distro.
>

I do sympathise with these points, and with the concern that it might
be overwhelming, given that it removes the ability to hold some
elements of the build environment fixed. Right now, you can at least
assert that it builds in a diverse set of places, based on what you
tested.

> Bruno

thanks,
sam

