Re: \# within quotes

2024-07-20 Thread Nick Bowler
On 2024-07-18 09:40, Zack Weinberg wrote:
> On Thu, Jul 18, 2024, at 5:09 AM, Tijl Coosemans wrote:
>> Automake 1.17 produces a warning for the use of \# ...
>
> For reference, the construct in question is
>
> subst = sed \
>   -e 's|@PACKAGE_VERSION[@]|$(PACKAGE_VERSION)|g' \
>   -e '1 s|^\#\!.*perl$$|\#\!$(PERL)|g' \
>   -e 's|@localstatedir[@]|$(localstatedir)|g' \
> [etc]
[...]
> I could be wrong about this, but I don't think any implementation of Make
> pays _any_ attention to shell quotation in commands.  Therefore, no, this
> is not portable.

Note that there is only any problem at all because the # characters are
used in a make variable assignment.  If it was written literally into a
rule instead, for example:

  substitute:
	sed -e '1s|^#!.*perl$$|#!$(PERL)|g' ...

there is no portability problem, as # does not introduce a make comment
when it appears within the commands of a rule.
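This is easy to see with a throwaway makefile (a hypothetical demo; any
make should behave the same way):

```shell
# '#' in a variable assignment starts a make comment and truncates the
# value, while '#' inside a rule command is passed to the shell as-is.
printf 'var = before # after\nrule:\n\t@echo "[$(var)]"\n\t@echo "a # in a rule survives"\n' > comment-demo.mk
make -f comment-demo.mk rule
```

The first echo shows that make discarded everything from the # onward
in the variable; the second prints its # intact.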

If the only reason for using a make variable is just to reduce typing
(because it's used in more than one rule) one simple option is to use
a configure-time substitution instead (totally untested):

  configure.ac:
    AC_SUBST([blargh], ['sed -e '\''1s|^#!.*perl|#!$(PERL)|'\'])
    AM_SUBST_NOTMAKE([blargh])

  Makefile.am:
    substitute:
	@blargh@ ...

Cheers,
  Nick



Re: [platform-testers] automake-1.16.92 released

2024-07-03 Thread Nick Bowler
On 2024-07-01 10:21, Zack Weinberg wrote:
> # clue Make that gen-foo also updated foo.h whenever foo.c is new
> foo.h: foo.c
> 	@:
> 
> If I had to guess, I would guess that someone thought Make would be
> more likely to skip invoking a shell if the command was actually empty
> rather than ":".  As it happens, GNU Make 4.4.1 appears to recognize
> ":" as a no-op; using strace I see it issue the same number of forks
> for both constructs.  But perhaps older versions of gnumake did not do
> this.  (This is clearly not a portable makefile to begin with, so
> questions of what other implementations do are moot.)

FWIW the POSIX-standard way to define a target rule with no commands is
to follow its prerequisites with a semicolon, for example:

  foo.h : foo.c ;

This is supported by literally every make implementation that I am
aware of, all the way back to the original make from UNIX V7.

Cheers,
  Nick



Re: automake-1.16.92 released

2024-06-29 Thread Nick Bowler
On 2024-06-29 07:28, Dave Hart wrote:
> I'm seeing a regression building ntpd on FreeBSD 12.1 amd64 with
> Autoconf 2.71 between Automake 1.16.5 and 1.16.92.  I haven't filed a
> bug report yet as I'm trying to do my part to characterize it well and
> provide an easy reproduction.  It may well be a bug in our use of
> Automake, in which case I apologize in advance, but I wanted to give a
> heads-up in case it affects a decision to release 1.17 before I get a
> good report together.
> 
> The divergence in behavior starts with:
> 
> autoreconf: configure.ac: not using Libtool

This is the first problem, so focus on solving this one; the later
problems are probably all related to it.

Without a reproducer to look at, I can only speculate about why things
are going wrong for you.  But here are some details which might help
your debugging:

autoreconf decides to run libtoolize based on an m4 trace right after
it runs aclocal, looking for expansion of LT_INIT (or the older
AC_PROG_LIBTOOL).

This means that for this to work at all:

 (1) a macro definition of LT_INIT must be available at this time, and
 (2) the LT_INIT macro must actually be expanded directly, and
 (3) tracing must not be disabled.

Normally aclocal will take care of (1).  Since this tool is part of
Automake, which is the component you have changed in your setup, it's
plausible that this is the underlying cause of your problem.

aclocal works basically by grepping your configure.ac and all the files
it knows about, looking for things that look like macro definitions and
things that look like macro expansions, and copying in any missing
definitions.  So for this to work:

  (1.1) aclocal must know where to find the definition of LT_INIT.
  (1.2) aclocal must see the place where LT_INIT is expanded.

Normally, aclocal and libtool are installed to the same prefix, libtool
will install its macros into the default aclocal search path, and
aclocal will find the macro definitions.  If they are installed into
different prefixes, aclocal will need help: you can use the dirlist
mechanism (recommended for a permanent installation) or, for a quick
fix, set the ACLOCAL_PATH environment variable to the installed
location of the libtool macros.

Another way this can work is if aclocal happens to pick up the macros
copied into the package from a prior run of libtoolize.

So check the generated aclocal.m4 after running autoreconf when you
encounter this problem.  I suspect that the libtool macros are missing
(noting that they may be incorporated indirectly via m4_include).
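Something like the following can confirm that.  The mocked-up aclocal.m4
here only imitates what aclocal produces when it finds the libtool
macros; on a real tree, run just the grep against your generated file:

```shell
# Fake aclocal.m4 for demonstration purposes only.
cat > aclocal.m4 <<'EOF'
m4_include([m4/libtool.m4])
m4_include([m4/ltoptions.m4])
EOF

# Look for the libtool macros, whether expanded directly or pulled
# in indirectly via m4_include.
grep -E 'LT_INIT|AC_PROG_LIBTOOL|m4_include' aclocal.m4 \
  && echo 'libtool macros appear to be present'
```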

Hope that helps,
  Nick



Re: RFC: Add public macros AC_LOG_CMD and AC_LOG_FILE.

2024-06-24 Thread Nick Bowler
On 2024-06-24 10:04, Zack Weinberg wrote:
> On Mon, Jun 24, 2024, at 2:56 AM, Nick Bowler wrote:
>> I think at the same time it would be worth documenting the AS_LINENO
>> functionality, which is the main internal functionality of these
>> macros that (unless you just go ahead and use it) Autoconf users
>> can't really replicate in their own logging.
> 
> I believe what you mean is you want _AS_ECHO_LOG to be promoted to a
> documented and supported macro, and for AS_LINENO_PUSH and AS_LINENO_POP
> also to be documented for external use.  Is this correct?  Did I miss
> any other internal macros that ought to be available for external
> use?

> I don't think we should tell people to be using $as_lineno directly,
> is there some use case for it that isn't covered by existing macros?

On reflection, I think I may have had a mistaken understanding of the
purpose of as_lineno when I last looked at these macros.  I assumed it
was related to supporting shells without LINENO support, but it seems
that is not the case, so maybe nothing is actually needed.

Perhaps a link from the (very short) description of AS_LINENO_PREPARE[1]
to the description of LINENO[2] might have helped.

That being said, something like AS_ECHO_LOG (with or without
AS_LINENO_PUSH/POP) looks generally useful, although I don't have an
immediate use case offhand besides implementing an AC_RUN_LOG workalike.

[1] 
https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.72/autoconf.html#index-AS_005fLINENO_005fPREPARE
[2] 
https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.72/autoconf.html#index-LINENO-1

[...]
> Will do.  The main point of the macro is that it does something a little
> fancier than "cat file", so it's unambiguous where normal log output
> resumes. Like the existing _AC_MSG_LOG_CONFTEST does:
> 
> configure: failed program was:
> | /* confdefs.h */
> | #define PACKAGE_NAME "lexlib-probe"
> | #define PACKAGE_TARNAME "lexlib-probe"
> | #define PACKAGE_VERSION "1"
> | ... etc ...
> configure: result: no
> 
> The "label" will go where _AC_MSG_LOG_CONFTEST prints "failed program was".

Looks great; this example output definitely helps in understanding when
one might want to use this macro.

Cheers,
  Nick



Re: RFC: Add public macros AC_LOG_CMD and AC_LOG_FILE.

2024-06-23 Thread Nick Bowler
On 2024-06-23 22:23, Zack Weinberg wrote:
> I'm thinking of making AC_RUN_LOG, which has existed forever but is
> undocumented, an official documented macro ...

Yes, please!

I will note that Autoconf has a lot of "run and log a command" internal
macros with various comments of the form "doesn't work well" suggesting
that this is a hard feature to get right.

I think at the same time it would be worth documenting the AS_LINENO
functionality, which is the main internal functionality of these
macros that (unless you just go ahead and use it) Autoconf users
can't really replicate in their own logging.

> +@anchor{AC_LOG_FILE}
> +@defmac AC_LOG_FILE (@var{file}, @var{label})
> +Record the contents of @var{file} in @file{config.log}, labeled with
> +@var{label}.
> +@end defmac

If you implement this, please explain in the manual what "labeled with
/label/" really means, otherwise I'm left wondering why this macro
exists when we can almost as easily write something like:

  { echo label; cat file; } >&AS_MESSAGE_LOG_FD

Including example logfile output together with the example program
might be sufficient.
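For illustration, here is roughly what that looks like in plain shell,
with file descriptor 5 standing in for AS_MESSAGE_LOG_FD and the "| "
prefix imitating _AC_MSG_LOG_CONFTEST-style labeling (filenames are
made up):

```shell
exec 5>config.log                      # configure keeps config.log on fd 5
printf 'first line\nsecond line\n' > conftest.out

# Label the file and prefix each line so it stands out in the log.
{
  echo 'configure: contents of conftest.out:'
  sed 's/^/| /' conftest.out
} >&5

cat config.log
```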

Cheers,
  Nick



Re: 1.16.90 regression: configure now takes 7 seconds to start

2024-06-16 Thread Nick Bowler
On 2024-06-16 21:35, Jacob Bachmeyer wrote:
> I think we might best be able to avoid this by using AC_CONFIG_COMMANDS_POST
> to touch config.status if neccessary, instead of trying to decide
> whether to sleep before writing config.status.

If the problem is simply that we want to avoid the situation where
"make" considers config.status to be out of date with respect to
configure, or something similar with any other pair of files, then this
should be solvable fairly easily with a pattern like this (but see below):

  AC_CONFIG_COMMANDS_POST([cat >conftest.mk <<'EOF'
  configure: config.status
  	false
  EOF
  while ${MAKE-make} -f conftest.mk >/dev/null 2>&1
  do
    touch config.status
  done])

In my own experience the above pattern is portable.  It works with HP-UX
make.  It works with a "touch" that truncates timestamps.  In the common
case where configure is sufficiently old the loop condition will always
be false and there is no delay.

It won't guarantee that config.status has a strictly newer timestamp
than configure (except on HP-UX), but it sounds like that's fine.

One missing element is that the loop has no iteration limit, which would
be a bit of a problem if the clock skew is severe (e.g., if configure's
mtime is years or even minutes in the future), so something extra is
probably desirable to bound the amount of time this runs to something
practical.
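An untested sketch of how a bound might look, keeping the same shape as
the fragment above (the counter name and the limit of 10 iterations are
arbitrary):

```
AC_CONFIG_COMMANDS_POST([cat >conftest.mk <<'EOF'
configure: config.status
	false
EOF
am_try=0
while test $am_try -lt 10 &&
      ${MAKE-make} -f conftest.mk >/dev/null 2>&1
do
  touch config.status
  am_try=`expr $am_try + 1`
done
rm -f conftest.mk])
```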

Cheers,
  Nick



Re: End of life dates vs regression test matrix

2024-06-14 Thread Nick Bowler
On 2024-06-14 10:09, Dan Kegel wrote:
> Oh, ok, perhaps I was confused by the note in automake 1.13.1's NEWS file
> (iirc), which said
> 
> Support for IRIX and the SGI C/C++ compilers will be removed in
>   Automake 1.14: they have seen their last release in 2006, and SGI
>   is expected to retire support from them in December 2013
> 
> Did that not happen?

This note was removed from the NEWS file in Automake 1.13.2 and replaced
with a note that only depcomp support for automatic dependency tracking
was planned to be removed in Automake 2.0.

"Automake 2.0" never materialized (and probably won't), and the depcomp
script today still has the code to handle SGI compilers (sadly, I don't
currently have an SGI/IRIX setup in my collection to test this).

But even if it were removed, automatic dependency tracking is not an
essential feature: packages can still be built fine without it.

Cheers,
  Nick



Re: 1.16.90 regression: configure now takes 7 seconds to start

2024-06-08 Thread Nick Bowler
On 2024-06-07 19:26, Jacob Bachmeyer wrote:
> Bruno Haible wrote:
>> [I'm writing to automake@gnu.org because bug-autom...@gnu.org appears
>> to be equivalent to /dev/null: no echo in
>> https://lists.gnu.org/archive/html/bug-automake/2024-06/threads.html
>> nor in https://debbugs.gnu.org/cgi/pkgreport.cgi?package=automake,
>> even after several hours.]
>>
>> In configure scripts generated by Autoconf 2.72 and Automake 1.16.90,
>> one of the early tests checking filesystem timestamp resolution...
>> takes 7 seconds! Seen e.g. on NetBSD 10.0.
[...]
> The problem with the proposed patch is that it tries to read a
> filesystem name instead of testing for the feature.  This would not be
> portable to new systems that use a different name for their FAT
> filesystem driver.

Maybe this is a silly question, but is there a reason why this test
needs to be performed by every single package that uses Automake?

I was under the impression that the purpose of this test was merely
to speed up running Automake's own test suite.

Cheers,
  Nick



Re: Getting long SOURCES lines with subdirs shorter

2023-12-01 Thread Nick Bowler
On 2023-12-01 15:37, Jan Engelhardt wrote:
> On Friday 2023-12-01 21:13, Mike Frysinger wrote:
>> On 17 Jul 2023 16:51, Karl Berry wrote:
>>> Hi Jan,
>>>
>>> Current automake likely won't have anything in store already,
>>>
>>> Not that I know of.
>>>
>>> a_SOURCES = $(addprefix aprog/,main.c foo.c bar.c baz.c)
>>>
>>> I've often wanted this myself. I'd certainly welcome a patch for it.
>>>
>>> Please work from automake trunk. None of the various branches are kept
>>> to date. (Sad but that's the reality.)
>>
>> prob stating the obvious, but $(addprefix) is a GNUism, so if we wanted to
>> use it, it'd required feature probing at configure time, and that always
>> complicates things :(
> 
> No-no, the idea was to make $(addprefix) an automakeism that is resolved before
> GNU make (or any other make) is ever invoked.

I suggest inventing a new syntax if this approach is taken, one that
doesn't overload real-world make syntax, since some people do use
Automake with GNU-make-specific rules and whatnot.  We already have
things like %reldir% which are expanded by Automake so maybe using
percent signs as a marker for "things expanded by automake" would
be a good starting point for this.

I do sometimes wish Automake had more built-in macro facilities.
One can do things like generate includeable snippets or preprocess
Makefile.am with, say, m4, but that adds a bunch of additional
complexity which is not always worthwhile.

Cheers,
  Nick



Re: Detect --disable-dependency-tracking in Makefile.am

2023-09-30 Thread Nick Bowler
On 2023-09-30, Nick Bowler  wrote:
> Two suggestions, one relying on Automake internals and one not:
>
> Suggestion 1)
> internal ... Automake conditional called AMDEP.
[...]
> Suggestion 2)
[...]
> AM_CONDITIONAL([NO_DEPS], [test x"$enable_dependency_tracking" = x"no"])

> Note that these approaches are different in the case where dependency
> tracking is disabled because it is not supported by the user's tools,
> rather than by explicit request.  This may or may not matter for your
> use case.

Never mind this last point; these suggestions are functionally identical:
the AMDEP conditional also only handles the "explicitly disabled" case
and is defined almost exactly the same way as NO_DEPS above (the test is
just reversed).

So there is no reason to use #1 except to save a line in configure.ac.

Cheers,
  Nick



Re: Detect --disable-dependency-tracking in Makefile.am

2023-09-30 Thread Nick Bowler
On 2023-09-29, Dave Hart  wrote:
> I'm guessing someone has trod this ground before.  I'd appreciate
> pointers to examples of how others have detected
> --disable-dependency-tracking to change their build behavior.

Two suggestions, one relying on Automake internals and one not:

Suggestion 1) It is technically undocumented, but longstanding Automake
behaviour is that dependency tracking is internally implemented using an
Automake conditional called AMDEP.  So you can literally just write in
Makefile.am:

  if AMDEP
  # stuff to do when dependency tracking is available
  else
  # stuff to do when dependency tracking is unavailable or disabled
  endif

Suggestion 2) All explicit --enable-foo/--disable-foo arguments to
a configure script are available in shell variables; in the case of
--disable-dependency-tracking you can do something like this in
configure.ac:

  AM_CONDITIONAL([NO_DEPS], [test x"$enable_dependency_tracking" = x"no"])

then in Makefile.am:

  if NO_DEPS
  # stuff to do when dependency tracking is disabled
  else
  # stuff to do otherwise
  endif

Note that these approaches are different in the case where dependency
tracking is disabled because it is not supported by the user's tools,
rather than by explicit request.  This may or may not matter for your
use case.

Hope that helps,
  Nick



Re: if vs. ifdef in Makefile.am

2023-03-01 Thread Nick Bowler
On 2023-03-01, Jan Engelhardt  wrote:
> You can utilize the same mechanism behind automake's `make V=1`:
>
> NDEBUG = 0
> my_CPPFLAGS_0 =
> my_CPPFLAGS_1 = -DNDEBUG
> my_CFLAGS_0 = -O3
> my_CFLAGS_1 =
> AM_CPPFLAGS = ${my_CPPFLAGS_${NDEBUG}}
> AM_CFLAGS = ${my_CFLAGS_${NDEBUG}}

This syntax is not standard or portable; Automake's silent-rules stuff
is backed by configure tests to ensure the syntax is supported by make.

That being said, with most make implementations it will be "fine", in
that make implementations tend to accept almost any line noise between
the braces of a variable expansion without complaint, simply expanding
it to the empty string.  Even the original make from V7 UNIX works this
way.  So the flags will be missing, but the build should still complete,
as these flags are presumably not critical.

One exception is HP-UX, which doesn't match the braces properly, so you
will end up with a stray } in CPPFLAGS and CFLAGS, most likely causing
the build to fail.  You can work around this particular problem by
exploiting make's multiple equivalent forms of variable expansion; for
example, $(my_CPPFLAGS_${NDEBUG}) should be quite "portable" in the
sense that it will either work or expand to the empty string.
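For what it's worth, the mixed form is easy to check with GNU make (a
throwaway demo; the flag values are only examples):

```shell
# Computed variable names: the inner ${NDEBUG} expansion selects which
# outer variable is expanded, exactly as the silent-rules trick does.
printf 'NDEBUG = 1\nmy_CPPFLAGS_0 =\nmy_CPPFLAGS_1 = -DNDEBUG\nall:\n\t@echo "[$(my_CPPFLAGS_${NDEBUG})]"\n' > nested.mk
make -f nested.mk          # prints [-DNDEBUG]
```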

But... Autoconf already has the AC_HEADER_ASSERT macro which adds a
--disable-assert configure option that sets NDEBUG... why not just use
that?  The other flags (-O3, -fsanitize=address) will need to be backed
by configure tests anyway as not all C compilers support these options,
so why not just add an --enable-debug or similar to control all this?
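A rough configure.ac sketch of that idea (untested; the option name,
variable name, and flags are all just illustrative):

```
AC_HEADER_ASSERT

AC_ARG_ENABLE([debug],
  [AS_HELP_STRING([--enable-debug], [build with debugging options])])

AS_IF([test "x$enable_debug" = xyes],
  [DEBUG_CFLAGS="-O0 -g"],
  [DEBUG_CFLAGS="-O3"])
AC_SUBST([DEBUG_CFLAGS])
```

Flags like -fsanitize=address would still want an AC_COMPILE_IFELSE
probe before being added unconditionally.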

Cheers,
  Nick



Re: rhel8 test failure confirmation?

2023-03-01 Thread Nick Bowler
On 2023-03-01, Karl Berry  wrote:
> Does anyone have access to an RHEL 8-based machine? Alma Linux, Rocky
> Linux, original RHEL, or even (sort of) CentOS 8? It would be nice if
> someone could run a make check there (from automake dev).
>   git clone -q git://git.savannah.gnu.org/automake.git
>   cd automake
>   ./bootstrap
>   ./configure && make >&cm.out
>   make -j8 VERBOSE=1 check keep_testdirs=yes >&ch8.out
> (choose whatever -j value you like)

  % cat /etc/redhat-release
  Red Hat Enterprise Linux release 8.0 (Ootpa)

  % uname -r
  4.18.0-305.10.2.el8_4.x86_64

I ran it twice, the first time out of my user directory (NFS), with no
failures, the second out of /tmp (XFS), with one failure:

  FAIL: t/remake-aclocal-version-mismatch.sh
  [...]
  
  Testsuite summary for GNU Automake 1.16i
  
  # TOTAL: 2935
  # PASS:  2808
  # SKIP:  87
  # XFAIL: 39
  # FAIL:  1
  # XPASS: 0
  # ERROR: 0
  
  See ./test-suite.log
  Please report to bug-autom...@gnu.org
  

Cheers,
  Nick



Re: Old .Po file references old directory, how to start fresh?

2022-08-04 Thread Nick Bowler
On 2022-08-04, Travis Pressler via Discussion list for automake wrote:
> I'm learning how to make an autotools project and have created a test
> project to work with. I ran make with a directory `nested` and then deleted
> it and deleted the reference to it in my `Makefile.am`.
>
> Now I'm running ./configure && make and I get the following:
>
> *** No rule to make target 'nested/main.c', needed by 'main.o'. Stop.
>
> How can I run `make` so that it doesn't reference this old nested
> directory?

Sounds like just some stale dependencies left over from a prior version.

Running "make distclean" should delete all the automatically generated
dependency information and allow the package to be rebuilt normally.

Enabling the Automake subdir-objects feature probably would avoid the
specific scenario that led to your stale dependency problem.
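For reference, enabling it is a one-line change, e.g. in configure.ac:

```
AM_INIT_AUTOMAKE([subdir-objects])
```

With subdir-objects, nested/main.c is compiled to nested/main.o with its
.deps directory alongside, so the dependency files live next to the
sources they describe and this kind of stale reference is less likely.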

Hope that helps,
  Nick



Re: Problem with build

2022-08-02 Thread Nick Bowler
Hi,

On 2022-08-01, aotto  wrote:
> but in ONE library I don't want to have a static library build because it
> is only used as dlopen (by tcl)…
[...]
> pkglib_LTLIBRARIES = libtclmkkernel.la
[...]
> question: what do I have to do to avoid a "static" library "libtclmkkernel.a"?

Since this seems to be a libtool question, I have added the libtool list
to Cc.

The following compilation option[1] seems appropriate:

  -shared

  Even if Libtool was configured with --enable-static, the object file
  Libtool builds will not be suitable for static linking.  Libtool
  will signal an error if it was configured with --disable-shared,
  or if the host does not support shared libraries.

And the following link option[2]:

  -shared
  If output-file is a program, then link it against any uninstalled
  shared libtool libraries (this is the default behavior). If output-
  file is a library, then only create a shared library. In the latter
  case, libtool will signal an error if it was configured with
  --disable-shared, or if the host does not support shared libraries.

So, if you add -shared to libtclmkkernel_la_CFLAGS and also to
libtclmkkernel_la_LDFLAGS, I'd expect this to work as you expect
(I've not tried it).
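In Makefile.am terms, that suggestion amounts to something like this
(untested, as noted above):

```
pkglib_LTLIBRARIES = libtclmkkernel.la
libtclmkkernel_la_CFLAGS = $(AM_CFLAGS) -shared
libtclmkkernel_la_LDFLAGS = -shared
```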

[1] https://www.gnu.org/software/libtool/manual/libtool.html#Compile-mode
[2] https://www.gnu.org/software/libtool/manual/libtool.html#Link-mode

Hope that helps,
  Nick



Re: How to speed up 'automake'

2022-05-03 Thread Nick Bowler
On 2022-05-02, Karl Berry  wrote:
> - @echo '# dummy' >$@-t && $(am__mv) $@-t $@
> + @: >>$@
>
> 1) does it actually speed anything up?

The answer seems to be a resounding "yes".  I tried one of my packages
on an old slow PC, and changing this one line in Makefile.in cuts almost
5 seconds off of the depfiles generation step in config.status.

(All .deps directories manually deleted between runs as otherwise the
rule commands will not be executed).

  Before (x5):
  % time config.status Makefile depfiles
  real  0m15.320s
  real  0m15.210s
  real  0m15.210s
  real  0m15.210s
  real  0m15.220s

  After (x5):
  % time config.status Makefile depfiles
  real  0m10.650s
  real  0m10.550s
  real  0m10.550s
  real  0m10.550s
  real  0m10.650s

That 5 seconds is a relatively small part of total configure runtime but
it is noticeable.

So if make implementations have no problem including empty files (I tried
a few and all seem OK with it) then it seems like a win.

> 2) without the mv I fear we are no longer noticing write failure

I think it's OK.  All shells that I know of set a failure status when
redirection fails, at least for simple commands like that.

One possible gotcha is that redirections on the : command are not always
reliably performed by older shells.

There might not be any real world problem because configure sets SHELL
in the Makefile to one that probably does not exhibit any problem.  If
it matters, performing the redirection with "exec" instead of ":" should
work in every shell and have pretty much identical performance.
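A tiny illustration of the "exec" spelling (the stamp filename is
arbitrary):

```shell
# Some very old shells do not reliably apply a redirection attached to
# the ':' built-in; 'exec' with no command applies the redirection to
# the shell itself, which works everywhere.  The subshell keeps the
# current shell's file descriptors untouched.
( exec >>stamp-dummy )
test -f stamp-dummy && echo 'stamp created'
```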

Cheers,
  Nick



Re: portability of xargs

2022-02-15 Thread Nick Bowler
On 2022-02-14, Mike Frysinger  wrote:
> context: https://bugs.gnu.org/53340
>
> how portable is xargs ?  like, beyond POSIX, as autoconf & automake both
> support non-POSIX compliant systems.  i want to use it in its simplest
> form: `echo $var | xargs rm -f`.

As far as I can tell, xargs was introduced in the original System V UNIX
(ca. 1983).  The utility subsequently made its way back into V10 UNIX
(ca. 1989), then into 4.3BSD-Reno (ca. 1990), and from there to
basically everywhere.  The original implementation from System V
supports the "-x", "-l", "-i", "-t", "-e", "-s", "-n" and "-p" options.
Of these, POSIX only chose to standardize "-x", "-t", "-s", "-n" and
"-p" suggesting possible incompatibilities with other options.

HP-UX 11 xargs expects the last filename to be followed by a white-space
character, or it will be ignored:

  gnu% printf 'no blank at the end' | xargs printf '[%s]'; echo
  [no][blank][at][the][end]

  hpux11% printf 'no blank at the end' | xargs printf '[%s]'; echo
  [no][blank][at][the]

The HP-UX 11 behaviour is also observed on Ultrix 4.5, but not on
4.3BSD-Reno.  Since xargs input typically ends with a newline, this is
not a serious practical problem.
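So a cautious spelling of the pattern under discussion makes sure the
list ends with a newline, for example (filenames invented):

```shell
var='a.tmp b.tmp c.tmp'
touch $var                       # unquoted on purpose: creates three files

# printf (unlike some echo implementations) reliably emits the final
# newline that HP-UX 11 and Ultrix xargs need to see the last name.
printf '%s\n' "$var" | xargs rm -f

ls a.tmp b.tmp c.tmp 2>/dev/null || echo 'all removed'
```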

Cheers,
 Nick



Re: adding a command line option for ACLOCAL_PATH-type search paths

2022-01-19 Thread Nick Bowler
On 19/01/2022, Mike Frysinger  wrote:
> the ACLOCAL_PATH functionality is useful (adding search dirs after -I),
> but a bit unwieldy as an env var.  any reason we can't add a command line
> option for this ?  call it --aclocal-path ?  or --extra-system-acdir ?
> or some other other boring name ?
>
> for context, when cross-compiling, autotools (i.e. automake) tend to be
> installed in the system (i.e. /usr/), while all the libraries & macros
> being built against are found in a separate sysroot (e.g. ~/sysroot/).
> we want to insert that ~/sysroot/usr/share/aclocal path after the set
> of -I flags from the package, but before /usr/share/aclocal.
> -mike

FWIW another option besides the env var is to create a third
directory, and put a file called "dirlist" in it with two lines:

  /path/to/sysroot/usr/share/aclocal
  /usr/share/aclocal

Then you can use the --system-acdir=/path/to/that/directory option
for aclocal to have it search both places.
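Concretely, with made-up paths:

```shell
# A third directory whose dirlist chains the sysroot macro directory
# ahead of the standard system location.
mkdir -p my-acdir
printf '%s\n' /home/me/sysroot/usr/share/aclocal /usr/share/aclocal \
  > my-acdir/dirlist
cat my-acdir/dirlist
# then invoke: aclocal --system-acdir="$PWD/my-acdir" ...
```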

Cheers,
  Nick



Re: Automake for RISC-V

2021-11-20 Thread Nick Bowler
On 20/11/2021, Billa Surendra  wrote:
> I have RISC-V native compiler on target image, but when I am compiling
> automake on target image it needs automake on target. This is the main
> problem.

Automake should not be required in order to install automake, provided
you are using a released version and have not modified the build system.

> In the same way I am trying to install texinfo on target image it
> also need automake on target.

Likewise, texinfo should also not require automake to install.

Cheers,
  Nick



Re: Automake for RISC-V

2021-11-18 Thread Nick Bowler
Hi Billa,

On 18/11/2021, Billa Surendra  wrote:
> Dear All,
>
> I have cross-compiled Automake-1.16.2 package with RISC-V cross compiler,
> but when I am executing binaries on RISC-V target OS image its gives errors
> like "not found".

Automake is written in Perl so it does not really get "compiled" in the
usual sense.

[...]
> $   ./configure  --prefix=/usr --host=riscv64-unknown-linux-gnu
> $ make -j8
> $  make DESTDIR=$risc-v_rootfs/ install
[...]
> *Error message (on risc-v rootfs):*
>
> ./aclocal
> -/bin/sh: ./aclocal: not found
>
> ./aclocal-1.16
> -/bin/sh: ./aclocal-1.16: not found

My first guess is that perl is not installed on the host (risc-v)
system.  Specifically, these files begin with #!/usr/bin/perl (or
similar - depends on configure tests) and that program is not
available when you run them.

However, I took a quick look at Automake's configure script and it
appears it detects perl only on the build system and then installs
that filename into the installed scripts.  So I think it will not
work out of the box unless a supported perl version is installed
at the same location on both the build and host machines.

Cheers,
  Nick



Re: `make dist` fails with current git

2021-10-13 Thread Nick Bowler
On 2021-10-13, Zack Weinberg  wrote:
> On Wed, Oct 13, 2021, at 2:11 PM, Nick Bowler wrote:
>> I think this happened because your CI system has done a shallow clone.
>> So the changelog generation failed because the git log is incomplete.
>
> I did a --single-branch clone, but not a shallow one.  Shouldn't the trunk
> be self-contained?

I think --single-branch should be fine: it seems to work for me...

But with e.g., a "--depth 1" clone I get the exact same error you
reported (because the commit IDs mentioned in .git-log-fix are not
available).

A possible workaround (only if you don't care about the resulting
changelog being correct) would be to truncate .git-log-fix before
running make dist.

Cheers,
  Nick



Re: `make dist` fails with current git

2021-10-13 Thread Nick Bowler
On 13/10/2021, Zack Weinberg  wrote:
> On Wed, Oct 13, 2021, at 11:54 AM, Bob Friesenhahn wrote:
>> On Wed, 13 Oct 2021, Zack Weinberg wrote:
>>>
>>> Looks like some kind of problem with automatic ChangeLog generation?
>>
>> To me this appears to be the result of skipping an important step in
>> what should be the process.  It seems that there should be a
>> requirement that 'make distcheck' succeed prior to a commit.  Then
>> this issue would not have happened.
>
> Well, yes, but how do I _fix_ it? :-)
>
> <.< I _may_ be investigating the possibility of setting up CI for automake
> so that problems like this are at least noticed in a timely fashion.  But if
> the tree is broken to begin with, it's troublesome...

I think this happened because your CI system has done a shallow clone.
So the changelog generation failed because the git log is incomplete.

I have no trouble running "make dist" from a normal git checkout.

Cheers,
  Nick



Re: generated lex/yacc sources?

2021-09-21 Thread Nick Bowler
On 21/09/2021, Karl Berry  wrote:
> Suppose I want to generate a lex or yacc input file from another file,
> e.g., a CWEB literate program. Is there a way to tell Automake about
> this so that the ultimately-generated parser/lexer [.ch] files are saved
> in srcdir, as happens when [.ly] are direct sources, listed in *_SOURCES?
>
> I should know the answer to this, but sadly, I don't. I couldn't find
> any hints in the manual or sources or online, although that probably
> only indicates insufficient searching.

I think all that should be needed is to list the .l (or .y) file in
_SOURCES normally, then just write a suitable make rule to update
it from the literate sources.  Automake doesn't "know" about it but
make should do the right thing.  For example, this seems to work OK:

  % cat >Makefile.am <<'EOF'
bin_PROGRAMS = main

main_SOURCES = main.l

# for simplicity, keep distributed stuff in srcdir
$(srcdir)/main.l: $(srcdir)/main.x
	cp $(srcdir)/main.x $@

EXTRA_DIST = main.x
MAINTAINERCLEANFILES = main.l
EOF

Cheers,
  Nick



Re: How to prevent distribution of `texinfo.tex`

2021-06-23 Thread Nick Bowler
On 2021-06-23, Werner LEMBERG  wrote:
>>> Yeah, it would be nice to have a means to control that.
>>
>> Yes it is really not a good solution in this case.  The file is
>> detected at "automake" time and the rule to distribute texinfo.tex
>> is baked into the generated Makefile.in.  That then gets bundled up
>> into the tarball.
>
> Yep.  The size of `texinfo.tex` (374kByte) is not significant today,
> but if there is no Texinfo documentation in a project it is completely
> pointless to include it.
>
> I vote for removing this file from the list of mandatory files.

It's not mandatory.  It only gets included when the file is present in
your development workspace (presumably by some mistake?) when you run
automake.  I don't know how you ended up with it there in the first
place but simply delete texinfo.tex from your workspace and re-run
automake: now it won't be included in the distribution (problem solved?).

I'm afraid I'm not really understanding why this is an issue.

Removing anything from this list will just break any projects that
depend on the current behaviour when they switch to a new version
of Automake, probably in subtle and hard-to-notice ways.

Cheers,
  Nick



Re: How to prevent distribution of `texinfo.tex`

2021-06-23 Thread Nick Bowler
On 23/06/2021, Peter Johansson  wrote:
>
> On 24/6/21 3:02 am, Werner LEMBERG wrote:
>>> As far as I know there is no way to disable this behaviour, although
>>> I agree the automagic file inclusion can be a bit funky.
>> Yeah, it would be nice to have a means to control that.
>
> There is the dist hook, which can be used to remove files from the
> distribution, but seems dangerous to exclude files from the tarball that
> are mentioned in the Makefile.

Yes it is really not a good solution in this case.  The file is detected
at "automake" time and the rule to distribute texinfo.tex is baked into
the generated Makefile.in.  That then gets bundled up into the tarball.

If you simply delete texinfo.tex in a dist-hook rule, users will not
be able to run "make dist" from the tarball.  This is one of the things
tested by "distcheck" so that should hopefully catch it.

My suggestion, if accidental inclusion of texinfo.tex is really a
serious problem, is to use distcheck-hook and simply fail the check
if texinfo.tex is present in distdir.
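
A hypothetical (untested) Makefile.am sketch of that check:

```make
# Hypothetical sketch: make "make distcheck" fail loudly if
# texinfo.tex was accidentally picked up into the distribution.
distcheck-hook:
	@if test -f '$(distdir)/texinfo.tex'; then \
	  echo 'ERROR: texinfo.tex must not be distributed' >&2; \
	  exit 1; \
	fi
```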

Cheers,
  Nick



Re: How to prevent distribution of `texinfo.tex`

2021-06-23 Thread Nick Bowler
On 2021-06-23, Werner LEMBERG  wrote:
> The file `texinfo.tex` is in the list of files (given by `automake
> --help`) that gets always distributed.  How can I disable this?  I
> don't have texinfo documentation.

The texinfo.tex file (and others listed along with it) is included in
the distribution only if the file is present in your project workspace
when you run "automake" to (re)generate Makefile.in.

As far as I know there is no way to disable this behaviour, although I
agree the automagic file inclusion can be a bit funky.

Cheers,
  Nick



Re: parallel build issues

2021-06-21 Thread Nick Bowler
On 2021-06-21, Werner LEMBERG  wrote:
>
>> The problem is not related to the snippet you posted.  The
>> concurrent recursive make invocations are being spawned from
>> somewhere else in your build system.
>
> The `Makefile.am` file one level higher is as follows.
>
>   ACLOCAL_AMFLAGS = -I gnulib/m4 -I m4
>
>   SUBDIRS = gnulib/src \
> lib \
> frontend \
> doc
>   EXTRA_DIST = bootstrap \
>bootstrap.conf \
>FTL.TXT \
>gnulib/m4/gnulib-cache.m4 \
>GPLv2.TXT \
>README \
>TODO \
>.version
>
>   BUILT_SOURCES = .version
>   .version:
> echo $(VERSION) > $@-t && mv $@-t $@
>
>   dist-hook:
> echo $(VERSION) > $(distdir)/VERSION.TXT
>
> Looks pretty standard to me, but maybe I'm wrong.

Nothing shown here is going to cause this problem.  But with recursive
build problems it is insufficient to look at just one makefile:
the problematic make invocations could be coming from anywhere in your
project.

For example, perhaps you have the same "frontend" directory listed also
in SUBDIRS for some other unrelated makefile?  That is probably the
simplest way this situation could happen.

Or perhaps the parent directory's makefile was itself being processed
by concurrent recursive invocations, which then results in independent
recursive invocations in the subdirectories.

If you can't find anything by a code inspection GNU make has some debug
features which may help visualize what is going on.

Cheers,
  Nick



Re: parallel build issues

2021-06-21 Thread Nick Bowler
On 2021-06-21, Warren Young  wrote:
> On Jun 21, 2021, at 11:49 AM, Werner LEMBERG  wrote:
>>
>>  bin_PROGRAMS += ttfautohintGUI
>
> Is Automake smart enough to realize what you’ve done there?

This is not a problem.  Automake interprets this assignment syntax
(and outputs a single assignment to bin_PROGRAMS in Makefile.in).

Cheers,
  Nick



Re: parallel build issues

2021-06-21 Thread Nick Bowler
On 2021-06-21, Werner LEMBERG  wrote:
> I have a `Makefile.am` in a `frontend` subdirectory that looks like
> the following (abridged).
[...]
> Running the generated `Makefile` with `make -j12`, I get this:
>
>   ...
>   Making all in frontend
>   make[2]: Entering directory '.../frontend'
>   ...
>   make  all-am
>   make[3]: Entering directory '.../frontend'
> CXX  info.o
> CXX  main.o
> CXX  ttfautohintGUI-ddlineedit.o
> CXX  ttfautohintGUI-info.o
> CXX  ttfautohintGUI-main.o
> CXX  ttfautohintGUI-maingui.o
> CXX  ttfautohintGUI-ttlineedit.o
>   make  ttfautohint
> CXX  ttfautohintGUI-ddlineedit.moc.o
> CXX  ttfautohintGUI-maingui.moc.o
> CXX  ttfautohintGUI-static-plugins.o
> CXX  ttfautohintGUI-ttlineedit.moc.o
>   make[4]: Entering directory '.../frontend'

Here one of your make rules has started a recursive build in the
"frontend" directory.

> CXX  info.o
>   make  ttfautohintGUI
>   make[4]: Entering directory '.../frontend'

Here *another* one of your make rules has *also* started a recursive
build in "frontend", concurrently with the one that was started
previously.

> CXXLDttfautohintGUI
> CXX  main.o
>   mv: cannot stat '.deps/info.Tpo': No such file or directory

Those independent make invocations are simultaneously running the same
compilation rules and have no knowledge of what the other make processes
are doing.  Failure is the inevitable outcome.

>   make[4]: *** [Makefile:1401: info.o] Error 1
>   make[4]: *** Waiting for unfinished jobs
> CXXLDttfautohint
>   make[4]: Leaving directory '.../frontend'
>   ...
>
> What am I doing wrong?  I would be glad for any pointers.

The problem is not related to the snippet you posted.  The concurrent
recursive make invocations are being spawned from somewhere else in
your build system.

Hope that helps,
  Nick



Re: config.sub/config.guess using nonportable $(...) substitutions

2021-03-09 Thread Nick Bowler
On 09/03/2021, Warren Young  wrote:
> On Mar 9, 2021, at 1:26 PM, Paul Eggert  wrote:
>>
>>> 1) There is no actual benefit to using $(...) over `...`.
>>
>> I disagree with that statement on technical grounds (not merely cosmetic
>> grounds), as I've run into real problems in using `...` along with " and
>> \,
>
> Me too, plus nesting.  The difference is most definitely not cosmetic.

I think what Karl means is that it is usually very easy to portably work
around the problems of nested and/or quoted `...` substitutions (usually
by just using a variable).
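
For instance (a hypothetical illustration, not actual config.guess code),
a nested substitution can be split into two steps so that only plain
backquotes are needed:

```shell
# hypothetical illustration: split the nesting into two steps
f=/usr/local/bin/prog
dir=`dirname "$f"`       # instead of: base=$(basename $(dirname "$f"))
base=`basename "$dir"`
echo "$base"             # prints "bin"
```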

In other words, the difference between a script using $(...) and an
equivalent, more portable script using `...` is only one of appearance.

Regardless, there are no quoted or nested substitutions whatsoever in
config.sub.  I see exactly one nested substitution in config.guess, and
just a handful of quoted ones.  None appear particularly challenging to
write portably.

> Autoconf came out in 1991, so it’s the equivalent of supporting Version 6
> Unix (1975) in the original release, which it probably didn’t do, given that
> the Bourne shell didn’t even exist at that point.
>
> Are the malcontents not expecting heroic levels of backwards compatibility
> that Autoconf never has delivered?

No, I'm just expecting that things are not broken gratuitously in core
portability tools because someone does not like the appearance of the
more portable syntax.

I _especially_ don't expect this kind of breakage when upgrading from one
Automake point release to another (1.16.1 to 1.16.3).

Cheers,
  Nick



Re: config.sub/config.guess using nonportable $(...) substitutions

2021-03-08 Thread Nick Bowler
On 2021-03-08, Tim Rice  wrote:
> On Mon, 8 Mar 2021, Nick Bowler wrote:
[...]
>> These scripts using $(...) are incorporated into the recently-released
>> Automake 1.16.3, which means they get copied into packages bootstrapped
>> with this version.  So now, if I create a package using the latest bits,
>> configuring with heirloom-sh fails:
>>
>>   % CONFIG_SHELL=/bin/jsh jsh ./configure CONFIG_SHELL=/bin/jsh
>>   configure: error: cannot run /bin/jsh ./config.sub
>
> But why would you use CONFIG_SHELL= to specify a less capable shell?
> It is there to specify a more capable shell in case it is not already
> detected.

It is simply a proxy to test Solaris /bin/sh behaviour using a modern
GNU/Linux system.  This is much easier and faster than actually testing
on old Solaris systems and, more importantly, anyone can download and
install this shell as it is free software and reasonably portable.

Obviously I can successfully run my scripts on GNU/Linux using a modern
shell such as GNU Bash.  But that's not the point: Autoconf and friends
are first and foremost portability tools.  For me the goal is that this
should be working anywhere that anyone might reasonably want to run it.

But right now, it seems these portability tools are actually *causing*
portability problems, rather than solving them.  From my point of view
this is a not so great situation.

Cheers,
  Nick



config.sub/config.guess using nonportable $(...) substitutions

2021-03-08 Thread Nick Bowler
Hi,

I noticed that config.sub (and config.guess) scripts were very recently
changed to use the POSIX $(...) form for command substitutions.

This change is, I fear, ill-advised.  The POSIX construction is
widely understood to be nonportable as it is not supported by
traditional Bourne shells such as, for example, Solaris 10 /bin/sh.
This specific portability problem is discussed in the Autoconf manual
for portable shell programming[1].

These scripts using $(...) are incorporated into the recently-released
Automake 1.16.3, which means they get copied into packages bootstrapped
with this version.  So now, if I create a package using the latest bits,
configuring with heirloom-sh fails:

  % CONFIG_SHELL=/bin/jsh jsh ./configure CONFIG_SHELL=/bin/jsh
  configure: error: cannot run /bin/jsh ./config.sub

  % jsh config.sub x86_64-pc-linux-gnu
  config.sub: syntax error at line 53: `me=$' unexpected

(heirloom-sh is essentially the Solaris /bin/sh, ported to run on GNU/Linux systems.)

What was the motivation for this change?  Backquotes work fine and are
more portable.  Can we just revert it so the script works again with
traditional shells?  Surely these scripts should be maximally portable,
I would think?

[1] 
https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.70/autoconf.html#index-_0024_0028commands_0029

Cheers,
  Nick



Re: RFC: Bump minimum Perl to 5.18.0 for next major release of both Autoconf and Automake

2021-02-18 Thread Nick Bowler
On 2021-02-18, Karl Berry  wrote:
> I think the right thresholds are 5.10 for absolute minimum and 5.16
> for 'we aren't going to test with anything older than this'
>
> I appreciate the effort to increase compatibility with old versions.
>
> I imagine you could provide Digest::SHA "internally", or test for it as
> Nick suggested, but I know how much of a pain it is to avoid/check for
> use of things that have seemingly been around forever. (Comes up all the
> time in the TeX world.)

Just to clarify, I was not suggesting that any kind of test is needed
before going ahead and using this module.  If there is a good reason
to use the module in Autoconf, as far as I'm concerned we should just
go ahead and use it.

I was just pointing out that requiring this module in Autoconf does not,
by itself, imply requiring perl 5.10, as the module may be available on
older installations too.

The reason for failures due to a missing module like this will be
obvious immediately.  A configure test may be _nice_ but probably
just extra work that is not really needed.

Cheers,
  Nick



Re: RFC: Bump minimum Perl to 5.18.0 for next major release of both Autoconf and Automake

2021-02-18 Thread Nick Bowler
Hi Zack,

On 2021-02-17, Zack Weinberg  wrote:
> On Fri, Jan 29, 2021 at 5:54 PM Karl Berry  wrote:
>> But, I think it would be wise to give users a way to override the
>> requirement, of course with the caveat "don't blame us if it doesn't
>> work", unless there are true requirements such that nothing at all would
>> work without 5.18.0 -- which seems unlikely (and undesirable, IMHO).
>> 2013 is not that long ago, in autotime.
>
> This is a reasonable suggestion but Perl makes it difficult.
[...]
> What we could do is something like this instead:
>
>use 5.008;  # absolute minimum requirement
>use if $] >= 5.016, feature => ':5.16';  # enable a number of
> desirable features from newer perls
>
> + documentation that we're only _testing_ with the newer perls.

FWIW, I just checked and I do currently build an Autotest testsuite
on a system where "perl" is perl 5.8.3, which works on autoconf-2.69.

So I suppose if Autoconf required a newer version, and I required a
newer version of Autoconf, then this is a problem.  But due to the
nature of Autoconf this is exclusively my problem and does not impact
downstream users at all.  So I'd just solve the problem (perhaps by
running autom4te on an updated setup) and wouldn't be bothered if
things are broken for a reason.

Only testing with new(ish) perl versions is not at all a problem IMO.
Interoperability is always "best effort": nobody can test every possible
system configuration.  As long as we don't claim to support systems
that are never ever tested, people who care about particular systems
just have to speak up when things stop working.

> I did some more research on perl's version history (notes at end) and
> I think the right thresholds are 5.10 for absolute minimum and 5.16
> for 'we aren't going to test with anything older than this'.  5.10 is
> the oldest perl that shipped Digest::SHA, which I have a specific need
> for in autom4te;

... on the topic of reasons to break things, the perl 5.8 installation
in question does seem to have Digest::SHA available to it.  So for this
dependency I would suggest Autoconf follow its own philosophy:
"you must have the Digest::SHA perl module" is a different requirement
from "you must have perl version 5.10 or newer".

> it is also the oldest perl to support `state` variables and the `//`
> operator, both of which could be quite useful.

However these new syntactic constructs are obviously unavailable.
I think "//" is not a great reason (by itself) to break compatibility
but "state" could be.

Cheers,
  Nick



Re: DIST_COMMON

2021-02-17 Thread Nick Bowler
On 2021-02-17, Leo Butler  wrote:
> I cannot find DIST_COMMON documented in the automake manual[*]. Is this
> intended or an oversight?

Most likely intentional, this looks pretty internal to the "make dist"
machinery and not meant to be used directly by package authors.

> Looking at the automake perl script doesn't really enlighten me,
> either.
>
> I would like to know:
>
> -what does DIST_COMMON contain by default?

Looks to me like it is set to the list of files that Automake will
package by "make dist" that aren't otherwise explicitly listed in
Makefile.am.

So, loosely speaking, it should contain all[1] the files that Autoconf
used to produce "configure" plus all the files that Automake used to
produce "Makefile.in", plus the outputs of those processes, plus a
few other files that get automatically distributed such as the various
GNU-standard files like ChangeLog.  Probably some other things too.

[1] the ones Automake knows about, anyway.

> -is it possible to set it or otherwise over-ride it in Makefile.am?

Technically the answer to this question is "yes".

Automake allows pretty much anything that it generates to be overridden
by Makefile.am, including DIST_COMMON: just include an assignment to
DIST_COMMON in Makefile.am, setting it to whatever you desire, and it
will suppress the assignment generated by Automake.

However, while this is possible, overriding Automake-internal definitions
is not generally recommended.  If you need to tweak this, consider using
a dist-hook rule instead.
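
For example, a hedged sketch (file names invented) of tweaking the
distribution contents in dist-hook rather than overriding DIST_COMMON:

```make
# hypothetical sketch: adjust the distribution in dist-hook instead
# of overriding Automake's internal DIST_COMMON variable
dist-hook:
	cp '$(srcdir)/extra-notes.txt' '$(distdir)/'
	rm -f '$(distdir)/unwanted-file'
```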

Cheers,
  Nick



Re: bug#45756: Prepending '+' to the recipe line when linking with GCC's -flto=jobserver

2021-02-04 Thread Nick Bowler
On 2021-01-09, R. Diez  wrote:
> [Resending in hopes it will attach to the new bug. --karl]
>
>> [...]
>> At any rate, it would be extremely helpful to have a minimal-as-possible
>> runnable (automake-able) example showing the case where the + needs to
>> be prepended. rdiez, can you create such a mini-project?
>  > [...]
>
> I normally use Autoconf, and I do not understand very much the separation
> between Autoconf and Automake. I do not know who is responsible for the
> generation of the makefile rules to link the executable. Either Autoconf or
> Automake must decide that GCC is not just used for compiling each object
> file, but also for linking, and that rule is not visible in the makefile.am
> file.
>
> Of course, such a linking rule does not user $(MAKE), and there is no '+'
> prefix, so the GNU Make jobserver file descriptors will not be passed to
> child processes. This is documented in the GNU Make manual.
>
> You do not need any special demo project for this. Just take any existing
> Automake project written in C or C++, and use these compilation flags in
> configure.ac :
>
> AM_CFLAGS="-flto=jobserver"
> AM_CXXFLAGS="-flto=jobserver"
>
> If you run the makefile with "make -j 2", GCC will receive environment
> variable MAKEFLAGS with a setting like "--jobserver-fds=xxx", but GNU Make
> will
> close the file descriptors mentioned there before executing the rule and
> running GCC. This issue is not visible in GCC yet due to this bug I
> reported:
>
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94330
>
> I am not sure how I can demonstrate this in a project, there is not actually
> much to demonstrate.
>
> This is not just an issue while linking. Like I said, any stage, including
> compilation of a single object file, could use the GNU Make jobs server. So
> there should be a global option in Autoconf or Automake to prepend a '+' to
> all generated rules.

This issue has come up from time to time; I think I wrote something on
it recently.  I think everyone can agree that a solution to this problem
is desirable.

However simply prepending "+" to commands is not practical for Automake
to do because "+" has way more effects than just keeping the jobserver
fds open.  In particular, it will completely break "make -n".

A configure option to allow the user to enable this (rather than an
automake option) would probably be a simple and acceptable way to
get things at least working, even if it's not an "ideal" solution.

Another possibility is for a "+"-prefixed command to check MAKEFLAGS
to see if options like -n that suppress command execution were used
(Automake already has to do this sort of thing in some rules).
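
A crude hypothetical sketch of such a check (GNU make puts the
single-letter options in the first word of $MAKEFLAGS; Automake's real
dry-run detection is considerably more careful than this):

```shell
# crude hypothetical check: is 'n' among the single-letter options?
flags=${MAKEFLAGS%% *}    # first word of MAKEFLAGS under GNU make
case $flags in
  *n*) echo "dry run: skipping real work";;
  *)   echo "doing real work";;
esac
```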

Since I believe the jobserver feature is exclusive to GNU make, I
imagine it would also be possible to make use of GNU make substitution
features to only add the "+" when make options that suppress command
execution are omitted.  This could probably be done in a manner that is
interoperable with other make implementations and would likely perform
better than shell tests inside commands.

Finally this issue could also probably be solved by changing GNU make
itself: providing another mechanism to keep jobserver fds open in rules.

Cheers,
  Nick



Re: Automake's file locking

2021-02-03 Thread Nick Bowler
On 2021-02-03, Bob Friesenhahn  wrote:
> GNU make does have a way to declare that a target (or multiple
> targets) is not safe for parallel use.  This is done via a
> '.NOTPARALLEL: target' type declaration.

According to the manual[1], prerequisites on .NOTPARALLEL target are
ignored and this will simply disable parallel builds completely for
the entire Makefile.  I did a quick test and the manual seems to be
accurate about this.

Order-only prerequisites can be used to prevent GNU make from running
specific rules in parallel.  These are more difficult (but not impossible)
to declare in an interoperable way.

[1] https://www.gnu.org/software/make/manual/make.html#index-_002eNOTPARALLEL

Cheers,
  Nick



Re: Automake's file locking (was Re: Autoconf/Automake is not using version from AC_INIT)

2021-01-28 Thread Nick Bowler
On 2021-01-28, Zack Weinberg  wrote:
> There is a potential way forward here.  The *only* place in all of
> Autoconf and Automake where XFile::lock is used, is by autom4te, to
> take an exclusive lock on the entire contents of autom4te.cache.
> For this, open-file locks are overkill; we could instead use the
> battle-tested technique used by Emacs: symlink sentinels.  (See
> https://git.savannah.gnu.org/cgit/emacs.git/tree/src/filelock.c .)
>
> The main reason I can think of, not to do this, is that it would make
> the locking strategy incompatible with that used by older autom4te;
> this could come up, for instance, if you’ve got your source directory
> on NFS and you’re building on two different clients in two different
> build directories.  On the other hand, this kind of version skew is
> going to cause problems anyway when they fight over who gets to write
> generated scripts to the source directory, so maybe it would be ok to
> declare “don’t do that” and move on.  What do others think?

I think it's reasonable to expect concurrent builds running on different
hosts to work if and only if they are in different build directories and
no rules modify anything in srcdir.  Otherwise "don't do that."

If I understand correctly the issue at hand is multiple concurrent
rebuild rules, from a single parallel make implementation, are each
invoking autom4te concurrently and since file locking didn't work,
they clobber each other and things go wrong.

I believe mkdir is the most portable mechanism to achieve "test and set"
type semantics at the filesystem level.  I believe this works everywhere,
even on old versions of NFS that don't support O_EXCL, and on filesystems
like FAT that don't support any kind of link.
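
A minimal sketch of that mkdir-based approach (hypothetical lock name;
real code must also handle stale-lock recovery, which is the challenge
discussed next):

```shell
# hypothetical sketch: mkdir either creates the directory or fails,
# atomically, so it behaves like a test-and-set operation
lockdir=example.lock
if mkdir "$lockdir" 2>/dev/null; then
  echo "lock acquired"    # protected work goes here
  rmdir "$lockdir"
else
  echo "lock busy, try again later" >&2
fi
```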

The challenge with alternate filesystem locking methods compared to
proper file locks is that you need a way to recover when your program
dies before it can clean up its lock files or directories.

Could the issue be fixed by just serializing the rebuild rules within
make?  This might be way easier to do.  For example, we can easily
do it in NetBSD make:

  all: recover-rule1 recover-rule2
  clean:
rm -f recover-rule1 recover-rule2

  recover-rule1 recover-rule2:
@echo start $@; sleep 5; :>$@; echo end $@

  .ORDER: recover-rule1 recover-rule2

Heirloom make has a very similar mechanism that does not guarantee
relative order:

  .MUTEX: recover-rule1 recover-rule2

Both of these will ensure the two rules are not run concurrently by a
single parallel make invocation.

GNU make has order-only prerequisites.  Unlike the prior methods, this
is trickier to do without breaking other makes, but I have used a method
like this one with success:

  # goal here is to get rule1_seq set to empty string on non-GNU makes
  features = $(.FEATURES) # workaround problem with old FreeBSD make
  orderonly = $(findstring order-only,$(features))
  rule1_seq = $(orderonly:order-only=|recover-rule1)

  recover-rule2: $(rule1_seq)

I don't have experience with parallel builds using other makes.

Cheers,
  Nick



Re: Future plans for Autotools

2021-01-22 Thread Nick Bowler
As always, thanks for all your effort Zack!

I wanted to share some of my thoughts on Autoconf and friends.  Maybe I
wrote too much.

For me the most important requirement of the GNU build system is that
it must be as straightforward as possible for novice users to build free
software packages from source code, with or without local modification.

This is what empowers users with the benefits of free software.  If
users are unable to build or modify the software that they use, they
are unable to take advantage of those benefits.

For me, every other consideration is secondary.

The interface consistency prescribed by the GNU coding standards goes
a long way: you learn the steps for one package and can apply that
knowledge to almost any other package.

The trend towards requiring everyone to build from VCS snapshots
and requiring zillions of specific versions of various build tools
is concerning.  Unfortunately I think many developers don't really
care about the user experience when it comes to building their software
releases from source.

This brings me to another important strength of the GNU Build System: if
I prepare a package today I want to be confident that people will still
be able to build it 5, 10, 20 or more years from now.

Now obviously we can't predict the future but we can look to past
experience: just today, I unpacked GNU Bison 1.25 (ca. 1996) on a modern
GNU/Linux system, running on a processor architecture and distribution
that didn't even exist back then, and it builds *out of the box*.

Typical issues encountered with old GNU packages are usually very minor
if you have any problems at all.  For a more complex example, I tried
building glib-1.2.10 (ca. 2001).  I had to update config.sub/config.guess
to the latest, set CC='gcc -std=gnu89' (because the code does not work with
C99 inline) and edit one line of code to disable use of an obsolete GNU C
extension (both compilation problems are due to not following the Autoconf
philosophy and using version checks instead of feature checks, oops!)

My general experience with CMake is that you probably can't build any
old packages because whatever version of CMake you have available simply
doesn't understand the package's build scripts, and the version which
could understand them just doesn't work on your system because you have
a newer processor or something.

I don't have enough experience with Meson to say.  Mainstream free
software packages have only very recently started using it.  On the
GNU side, glib-2.60 (ca. 2019) converted to meson and I am able to
build it.  If possible, I will have to try again in 2039.  I bet the
autoconf-based glib-1.2.10 tarball from 2001 will still mostly work,
and so will the 1996 version of GNU Bison.

Cheers,
  Nick



Re: INSTALL_DATA += -p

2020-11-03 Thread Nick Bowler
On 2020-11-03, Thien-Thi Nguyen  wrote:
> I'd like to make sure that timestamps are preserved on "make
> install".

In general, preserving timestamps while copying files cannot be done
reliably and when it is possible, it is difficult to do in a portable
fashion.  But it seems preservation is not really required.

> I found the variable ‘INSTALL_DATA’ but cannot do the
> above (subject line) addition to Makefile.am, because Automake
> interprets INSTALL as a primary and bails out since that is not
> defined as such.

These INSTALL... variables come from configure (via AC_PROG_INSTALL); if
you want to change them you can alter them in your configure script.

It is important to note, however, that Automake's supplied install-sh
script currently does not implement "-p" or any other option to copy
timestamps.

> The background reason is that i am installing .scm and .go files
> (the latter compiled from the former) and the .go files need to
> have a "later" timestamp than the .scm files for Guile to DTRT.

"Later" here means "greater than?"  Or is "greater than or equal to"
acceptable?

> I suppose a workaround is to use an installation hook to simply
> touch(1) the .go files.

An install hook would be my recommendation.

You should touch both files, as some "touch" implementations
truncate timestamps (this means a touched file could potentially
get an earlier timestamp than an untouched one).

Touching both files only ensures "greater than or equal to" timestamps.
If the timestamps must also be different then this is a bit trickier
(call touch in a loop until the timestamp changes).
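
Something along these lines (a hypothetical sketch; note that "test -nt"
is itself not available in old Bourne shells, so a maximally portable
version would need a different comparison):

```shell
# hypothetical sketch: keep touching b until it is strictly newer than a
touch a b
while [ ! b -nt a ]; do
  sleep 1    # wait out the filesystem's timestamp granularity
  touch b
done
```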

Cheers,
  Nick



Re: configure: error: cannot find install-sh, install.sh, or shtool in "." "./.." "./../.."

2020-08-02 Thread Nick Bowler
On 2020-08-01, TomK  wrote:
> Thanks very much Karl.  Appreciate this feedback. You've answered alot
> of the lingering questions I had around this topic.  Much appreciated!
>
> Just for some continued open discussion, included some basic answers.
> Not meant to sway to one side or the other, just to understand reasoning
> behind what I see used.
>
> On 7/30/2020 5:05 PM, Karl Berry wrote:
[...]
>> I don't agree with "deprecated". Left quotes must continue to work
>> forever and there is every reason to use them, in shell code that must
>> be maximally portable.
>
> access.redhat.com/solutions/715363
>
> for i in `find /usr/include -type f `
>
> fails on very large results.  $() has a higher results limit.

This is the least of your problems with this construct.  It will also
perform word splitting and pattern expansion on the filenames, which is
almost certainly not intended or desired.  If you need to do something
like this in a configure script, it is probably better to find a portable
construct which does not have these problems (I suspect such a solution
will not use command substitution at all).
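
For example (hypothetical), letting find run the command itself avoids
both the substitution and the splitting entirely:

```shell
# hypothetical sketch: find -exec passes each filename as a single
# argument, so whitespace and glob characters cause no trouble
mkdir -p demo
echo hello > 'demo/* *'     # a filename full of metacharacters
find demo -type f -exec printf '%s\n' {} +
# prints: demo/* *
```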

Even if $() was portable it does not improve things very much; consider
the following:

  % mkdir /tmp/cwd
  % echo hello >/tmp/cwd/totally_unrelated_file
  % cd /tmp/cwd
  % mkdir /tmp/test
  % mkdir /tmp/test/not_even_a_file
  % echo hello >'/tmp/test/* *'
  % for i in $(find /tmp/test -type f); do printf '%s\n' "$i"; done
  /tmp/test/* *
  /tmp/test/not_even_a_file
  totally_unrelated_file

Oops...



Re: configure: error: cannot find install-sh, install.sh, or shtool in "." "./.." "./../.."

2020-07-30 Thread Nick Bowler
On 25/07/2020, TomK  wrote:
> Out of curiosity and a bit on another topic.  Is the syntax written
> like this for compatibility reasons with other shells?  Or because it
> could get in the way of the parsers Automake uses?

Autoconf is primarily a portability tool and thus a key goal is for
generated configure scripts to run in pretty much any unix-like
environment that anyone could possibly want to run them on.

Although these days configure expects support for things like shell
functions and comments so you probably won't have much luck on the
original 1979 Bourne shell, but there are a lot of different ksh88
derivatives/workalikes still floating around which were the basis
for POSIX standardization and the default shell on basically every
unix-like environment you're likely to encounter outside of museums.

> Code snippet:
[...]
> In particular:
>
>
> "x$host_alias" != x;
>
>
> I know adding 'x' or another character prevents failures when a variable
> is empty but that's been deprecated for sometime.

I don't know of any shell that gets this wrong with empty variables.
But metacharacters are a problem.  For example, if we write:

  test "$a" != "hello"

then at least some shells, (e.g., heirloom-sh), fail if a="!" or
a="(", etc, even though it is valid according to POSIX:

  $ a='('
  $ test "$a" != "hello"
  test: argument expected
  $ echo $?
  1

Prefixing with a benign (usually "x") string neatly avoids this problem:

  $ test x"$a" != x"hello"
  $ echo $?
  0

It's simple enough to do that this pattern is usually applied everywhere,
even in cases where it is not strictly necessary -- I don't need to know
whether or not host_alias can legitimately be set to '!'.

> [] is deprecated in favor of [[]]
> `` is deprecated in favor of $()

[[ ]] is not POSIX compliant, and won't work at all in many modern
shells including dash.

$() is POSIX but unfortunately not widely portable in practice, e.g.,
again in heirloom-sh:

  $ a=$(echo hello)
  syntax error: `a=$' unexpected

Cheers,
  Nick



Re: Installing something nonstandard in $(libdir)

2020-02-07 Thread Nick Bowler
On 2020-02-07, Tom Tromey  wrote:
>> "Zack" == Zack Weinberg  writes:
>
> Zack> Makefile.am:158: error: 'libfoo$(SOEXT).1' is not a standard library
> name
> Zack> Makefile.am:158: did you mean 'libfoo$(SOEXT).a'?
>
> Zack> and lib_DATA is the obvious alternative but that doesn't work either:
>
> Zack> Makefile.am:145: error: 'libdir' is not a legitimate directory for
> 'DATA'
>
> Zack> So, the question is, is there a lib_SOMETHING variable that I can use
> Zack> to install to $(libdir) arbitrary stuff that automake doesn't
> Zack> understand?  If not, is there some other option?
>
> I believe you can work around the checks by providing your own install
> directory variable, like:
>
> myexeclibdir = $(libdir)
> myexeclib_DATA = ...
>
> The "exec" is in the name to ensure that "make install-exec" installs
> these files, see (info "(automake) The Two Parts of Install") for this
> detail.

Nice!

The install-exec versus install-data was actually why I didn't suggest a
similar trick, I had no idea that simply putting "exec" in the directory
variable name has this effect.

Learn something every day!

Cheers,
  Nick



Re: Installing something nonstandard in $(libdir)

2020-02-06 Thread Nick Bowler
Hi Zack,

On 2/6/20, Zack Weinberg  wrote:
> For reasons too complicated to get into here, I have been
> experimenting with building shared libraries in an autoconf+automake
> build *without* using libtool.  [Please do not try to talk me out of
> this.]  I have something that works correctly on ELF-based operating
> systems with GCC, *except* for installation, where automake is
> refusing to do what I want.
[...]
> So, the question is, is there a lib_SOMETHING variable that I can use
> to install to $(libdir) arbitrary stuff that automake doesn't
> understand?  If not, is there some other option?

You can use an install-exec-hook to install your libraries.

Hope that helps,
  Nick



Re: Supporting build rules for grouped targets

2020-01-20 Thread Nick Bowler
On 2020-01-20, Markus Elfring  wrote:
> Variants of the make software support build rules for grouped targets.
>
> Examples:
> * 
> https://www.gnu.org/software/make/manual/html_node/Multiple-Targets.html#Rules-with-Grouped-Targets
> * 
> https://docs.oracle.com/cd/E86824_01/html/E54763/make-1s.html#REFMAN1make-1s-usag
>
> How can feature checks be achieved for such functionality around safer
> management of desired dependencies?

I don't understand exactly what you are asking, but the Automake manual
has a section[1] on how to write portable makefile rules for programs
that produce multiple output files.

I normally use the dedicated witness file and deletion-recovery rules
without locks, which is fairly simple and sufficient for most cases.

[1] https://www.gnu.org/software/automake/manual/automake.html#Multiple-Outputs
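
For reference, a minimal sketch of that witness-file pattern (the
generator and file names here are hypothetical):

```makefile
# One run of gen-data produces both data.c and data.h; the witness
# file data.stamp records that the generator actually ran.
data.c data.h: data.stamp
# Recover from the deletion of an output without re-running the
# generator twice: rebuild the witness if an output went missing.
	@if test -f $@; then :; else \
	  rm -f data.stamp; \
	  $(MAKE) $(AM_MAKEFLAGS) data.stamp; \
	fi
data.stamp: data.def
	./gen-data data.def
	@touch data.stamp
```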

Cheers,
  Nick



Re: How to install data / lib in more than 1 place?

2019-12-11 Thread Nick Bowler
On 12/11/19, Georg-Johann Lay  wrote:
>> On Tue, 10 Dec 2019, Georg-Johann Lay wrote:
[...]
>>> Will this also work with same file names? Like
>>>
>>> avrfoo_LIBRARIES = libfoo.a
>>>
>>> avrbar_LIBRARIES = libfoo.a
>>>
>>> or would that confuse the tools?
[...]
> It appears to work though, and even if the libs are built multiple
> times, I could reduce the number of Makefile.am's from ~1200 to ~250.

Another option is to use an install hook to copy a file installed in one
location into all the other locations.  Using a hook would also enable you
to e.g., link the files instead of copying them when that is possible.
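
A hedged sketch of such a hook (the avrfoo/avrbar directory variables
are assumed from the quoted Makefile.am; untested):

```makefile
avrfoo_LIBRARIES = libfoo.a

# After the normal install, hard-link the library into the second
# location, falling back to a copy across filesystems.
install-data-hook:
	$(MKDIR_P) "$(DESTDIR)$(avrbardir)"
	ln -f "$(DESTDIR)$(avrfoodir)/libfoo.a" \
	      "$(DESTDIR)$(avrbardir)/libfoo.a" \
	  || cp -p "$(DESTDIR)$(avrfoodir)/libfoo.a" \
	           "$(DESTDIR)$(avrbardir)/libfoo.a"
```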

Cheers,
  Nick



Re: Make -j to Compiler Question

2019-09-22 Thread Nick Bowler
Hello Nick,

On 2019-09-21, Nicholas Krause  wrote:
> I'm currently looking on and continuing the palleraling of gcc. There
> was a discussion about if its possible to link to make -j to split the
> tasks if possible. If so how and what is the easiest way to get this
> info into

Assuming you're talking about interacting with the GNU make jobserver, I
suggest asking this question on the GNU make list[1], after reading the
relevant documentation on the jobserver[2].

There are other make implementations (e.g., NetBSD make) which support
parallelism but probably use a different method.

[1] https://lists.gnu.org/mailman/listinfo/help-make
[2] https://www.gnu.org/software/make/manual/html_node/POSIX-Jobserver.html

Cheers,
  Nick



Re: BUILT_SOURCES called on `make dist` even if the built sources should not be included in the dist

2019-09-17 Thread Nick Bowler
Hi Jerry,

On 9/17/19, Jerry Lundström  wrote:
> This problem seems to have been introduced in v1.16 with:
>
> - "./configure && make dist" no longer fails when a distributed file
> depends on one from BUILT_SOURCES.
>
> And what I can see in the Makefile output is that $(BUILT_SOURCES) has
> been added to distdir.
>
> I can't really see how this change got approved, isn't the point of
> BUILT_SOURCES to be sources built when building!?  Including them into
> distributions seems wrong.

I'm not sure exactly what the problem you are having is, because ...

[...]
> Here is an example:
>
> ```
> bin_PROGRAMS = test
>
> EXTRA_DIST = ext/TEST
>
> test_SOURCES = main.c
> test_LDADD = ext/built.o
>
> BUILT_SOURCES = ext/built.o
>
> ext/built.o:
>   echo "int sdkjfhskjhfskjd(void){ return 0; }" > ext/built.c
>   gcc -c ext/built.c -o ext/built.o
>
> CLEANFILES = ext/built.c ext/built.o
> ```

... I just ran this example with Automake 1.16.1 and neither ext/built.c
nor ext/built.o are included in the distribution tar file generated by
'make dist'.

So it seems to be working exactly as you wanted it to work.

Cheers,
  Nick



Re: Help with -Werror

2019-04-24 Thread Nick Bowler
Hello,

On 2019-04-24, Phillip Susi  wrote:
> It seems like every time I go back to try to do somoe work on the parted
> sources I run into a failure to compile due to some silly warning or
> other and -Werror being enabled.  This time it is from a generated
> source file made by gperf.  Is this set by default these days in
> automake?  Because I can not figure out how it is being used.
> Makefile.am does not seem to have anything to turn it on.  Makefile sets
> CFLAGS_WERROR=-Werror, but I can see nothing that references
> CFLAGS_WERROR anywhere.  How does this variable end up being passed to
> gcc?
>
> I am tempted to just disable -Werror completely, but at the very least
> it should be disabled for BUILT_SOURCES since you can't really fix the
> warnings there.  Any advice on how to do this?

Automake does not add -Werror to the default C compiler flags, and it
does not do anything with a CFLAGS_WERROR variable.

If that is happening for a particular package, then it is because the
package authors did something to make it happen.  So probably any
questions about it happening in parted should be taken up with the
parted maintainer(s)...

For the most part, -Werror is a developer tool which will only cause
problems for users, so my strong recommendation is that it should
never appear in package releases, but not everybody subscribes to
that philosophy...

Cheers,
  Nick



Re: Is it possible to set the permission bits used by the default install target in a Makefile.am?

2019-03-13 Thread Nick Bowler
Hello Craig,

On 2019-03-13, Craig Sanders  wrote:
> Is it possible to set the permission bits used by the default install
> target in a Makefile.am?
>
> To help try and illustrate what I mean, I present a code snippet from one
> of my Makefie.am files.
>
>>> Begin code snippet >>
>
> gimpdir = ${prefix}
>
> gimp_SCRIPTS = scaleAndSetSize.py \
>ScaleAndSetSizeClass.py
>
> .PHONY: install
> install:
>
> mkdir -p ${prefix}
> ${INSTALL} -m 544 scaleAndSetSize.py ${prefix}
> ${INSTALL} -m 444 ScaleAndSetSizeClass.py ${prefix}
>
> << End code snippet <<
>
> My problem with this code snippet is - I don't like the fact that I have
> overridden the default install target to get the files installed with the
> permission bits set the way I want. Rather, I'd like to have the default
> install target do the install work for me, using permission bits that I
> would like to specify. Does anybody know if this is possible?

Automake uses INSTALL_SCRIPT to install scripts, which is normally provided
by AC_PROG_INSTALL from Autoconf (and is set to INSTALL).  You can set
this explicitly in Makefile.am to something different (or change the
value in configure).

However, that's probably a pain because you want different permissions
for different files.

One option would be to use both xxx_DATA and xxx_SCRIPTS, which are
installed by INSTALL_DATA and INSTALL_SCRIPT, respectively (this is the
only practical difference between xxx_DATA and xxx_SCRIPTS).  You can
then adjust those variables separately as desired.

Alternately you can use install-local[1] instead, to get more flexibility
but without replacing the standard "install" target.  Try to respect
DESTDIR as well, and prefer $(MKDIR_P) over open-coding mkdir -p.
For example (totally untested):

  544_scripts = scaleAndSetSize.py
  444_scripts = ScaleAndSetSizeClass.py

  install-local: install-my-scripts
  install-my-scripts:
$(MKDIR_P) "$(DESTDIR)$(gimpdir)"
$(INSTALL) -m 544 $(544_scripts) "$(DESTDIR)$(gimpdir)"
$(INSTALL) -m 444 $(444_scripts) "$(DESTDIR)$(gimpdir)"
  .PHONY: install-my-scripts

Consider a corresponding uninstall target as well:

  uninstall-local: uninstall-my-scripts
  uninstall-my-scripts:
test ! -d "$(DESTDIR)$(gimpdir)" || { cd "$(DESTDIR)$(gimpdir)" && \
  rm -f $(544_scripts) $(444_scripts); }
  .PHONY: uninstall-my-scripts

Something like that should be just as good as what you get from the
built-in "install" rule (be sure to test with 'make distcheck').

Hope that helps,
  Nick



Re: Parallel builds with some ordering constraints

2018-12-30 Thread Nick Bowler
On 12/29/18, Kip Warner  wrote:
> On Sat, 2018-12-29 at 16:10 -0500, Nick Bowler wrote:
[...]
>>   all_tests_except_start = test1.log test2.log test3.log test-
>> stop.log
>>   all_tests_except_stop = test-start.log test1.log test2.log
>> test3.log
>>
>>   $(all_tests_except_start): test-start.log
>>   test-stop.log: $(all_tests_except_stop)
>
> [snip]
>
>> Hope that helps,
>
> Almost! The problem is with the last rule you defined because a rule to
> generate test-stop.log would have already been generated by Automake
> and this would override it.

Huh.  That probably means the example in the manual is broken too.

Anyway, the solution is straightforward.  Rules in Makefile.am only
override Automake-supplied rules if they are spelled _exactly_ the same
way.  So you just need to change the spelling in the rule, usually by
using a variable like:

  test_stop_log = test-stop.log
  $(test_stop_log): blah blah blah

and the Automake-generated rule for test-stop.log should be emitted
normally.
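
Putting the two pieces together, an untested sketch (test names are
placeholders):

```makefile
TESTS = test-start test1 test2 test3 test-stop

all_tests_except_start = test1.log test2.log test3.log test-stop.log
all_tests_except_stop = test-start.log test1.log test2.log test3.log

# Spelling the target through a variable keeps Automake emitting its
# own rule for test-stop.log; these lines only add prerequisites.
test_stop_log = test-stop.log
$(all_tests_except_start): test-start.log
$(test_stop_log): $(all_tests_except_stop)
```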

Hope that helps,
  Nick



Re: Parallel builds with some ordering constraints

2018-12-29 Thread Nick Bowler
Hello,

On 2018-12-29, Kip Warner  wrote:
> Parallel builds work fine for my build tree of a system daemon I am
> developing. I have unit tests in the form of check_SCRIPTS and
> check_PROGRAMS.
>
> These unit tests, however, can only be partially parallelized because
> there needs to be some ordering constraints.

OK, I am assuming you are using the Automake parallel-tests feature.

> I have a unit test in check_SCRIPTS which starts the daemon via init.d.
> The daemon writes out a pid file. There is another unit test script to
> stop it via init.d.
[...]
> For ensuring the stop daemon script at least runs after the start
> script when make computes the dependency graph, I have a (working)
> hack. I simply make the stop daemon target shell script depend on the
> test log of the start script. This works, but it could break in the
> future with newer Automakes.
>
> But in any event, I still don't know what I can do for the binary
> check_PROGRAMS that test the daemon itself to constrain them to run
> between the former two?

So there's no problem with building the programs, the issue is just in
the execution order of the test cases?  You have one test case which
must run before all other test cases, and one test case which must run
after all other test cases.

The documented method to ensure ordering between two (or more) test
cases in the parallel test harness is to put explicit make prerequisites
between the log files[1], e.g. (totally untested):

  all_tests_except_start = test1.log test2.log test3.log test-stop.log
  all_tests_except_stop = test-start.log test1.log test2.log test3.log

  $(all_tests_except_start): test-start.log
  test-stop.log: $(all_tests_except_stop)

[1] 
https://www.gnu.org/software/automake/manual/automake.html#Parallel-Test-Harness

Hope that helps,
  Nick



Re: Recreate the config files

2018-12-07 Thread Nick Bowler
On 12/7/18, Deepa Ballari  wrote:
> I'm trying to add new options to newlib.I get all different sort of
> errors when I run autoconf,automake..
> How can I recreate the config files and sync with
> automake (GNU automake) 1.15, autoconf (GNU Autoconf) 2.69, libtool
> (GNU libtool) 2.4.6 ?
>
> List of errors:
> 1)newlib/libc$ autoconf
> configure.in:37: error: possibly undefined macro: AM_CONDITIONAL
>   If this token and others are legitimate, please use m4_pattern_allow.
>   See the Autoconf documentation.
> configure.in:71: error: possibly undefined macro: AC_LIBTOOL_WIN32_DLL
> configure.in:72: error: possibly undefined macro: AM_PROG_LIBTOOL

I suspect the issue is that you need to run aclocal first.  You
can sometimes use autoreconf which knows how to run several of the GNU
build tools in the right sequence.

However, the newlib project may have additional requirements to
bootstrap their build system, as many projects use additional functions
outside of the basic GNU set.  This would presumably be documented
somewhere in the newlib documentation (a lot of projects have a script
to do this, sometimes called 'bootstrap').

You will likely get better help on the newlib mailing list.

Cheers,
  Nick



Re: _SOURCES files in sub-directories lead to make distdir failure

2018-01-24 Thread Nick Bowler
Hi,

On 1/24/18, netfab  wrote:
> Into that project, there's a subdirectory to build a library using
> libtool-2.4.6. The source code of this library is organized into
> sub-directories, like this :
>>  mylib/makefile.am
>>  mylib/aaa.cpp
>>  mylib/aaa.h
>>  mylib/foo/bbb.cpp
>>  mylib/foo/bbb.h
>>  mylib/bar/ccc.cpp
>>  mylib/bar/ccc.h

Looks fine so far.

> The makefile.am for this lib contains :
>> libmyLIB_la_SOURCES = \
>>  aaa.cpp aaa.h \
>>  foo/bbb.cpp foo/bbb.h \
>>  bar/ccc.cpp bar/ccc.h

This looks fine too.

> I'm initializing automake with :
>> AM_INIT_AUTOMAKE([subdir-objects])

Also fine.

> When building the whole project, it works fine.
> However, when running :
>> make distcheck
>
>
> Is fails like following, and I don't see how to fix this :
>> make[5]: Entering directory '/path/to/project/build/src/lib/mylib'
>> make[5]: *** No rule to make target 'foo/bbb.h ', needed by 'distdir'. Stop.
>
> Any advice ? Thanks.

The thing that distcheck is testing here is that the package can be
built with separate source and build directories.  It appears that this
functionality is broken in your package, and distcheck is notifying you
of this fact.

Since you are hitting this problem now, you have probably only been
testing in-tree builds until this moment.

Unfortunately your provided snippets are not complete working code, and
I expect the error involves part of the code you have not shown us.

Cheers,
  Nick



Re: [PATCH] "make dist" did not depend on $(BUILT_SOURCES)

2017-11-28 Thread Nick Bowler
On 2017-11-28 18:13 -0800, Jim Meyering wrote:
> On Tue, Nov 28, 2017 at 12:45 PM, Nick Bowler  wrote:
> > The Automake manual unequivocally states that BUILT_SOURCES files are
> > generated only when running 'make all', 'make check' or 'make install'.
> >
> > So if they are going to be generated on 'make dist' as well, then the
> > manual needs a corresponding update.
> 
> Hi Nick,
> Thanks for the suggestion, but I do not think it is desired. "make
> dist" is already defined as building everything that goes into the
> distribution tarball, and that implies it must also build anything
> (e.g., from BUILT_SOURCES) that happens to be required to do that.

I agree that it *should* but not that it *must*, because BUILT_SOURCES
explicitly (by design) bypasses the usual prerequisite mechanisms.

If you use normal prerequisites instead of BUILT_SOURCES, everything
works just fine with respect to distribution, of course, and that is
the approach I would personally recommend in all cases.
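
For example, instead of putting a generated header in BUILT_SOURCES,
an explicit prerequisite on the object that needs it distributes and
rebuilds correctly (file names hypothetical):

```makefile
# main.c includes gen.h, so record that dependency directly rather
# than relying on the BUILT_SOURCES mechanism.
main.$(OBJEXT): gen.h
gen.h: gen.def
	./make-gen gen.def > $@
```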

> Perhaps more importantly, this is an implementation detail that feels
> like it should not be made part of the contract that the documentation
> provides ...

But now with the change applied, as it stands the documentation is
simply wrong.  For example, this passage from the manual (§9.4 Built
Sources):

  "... BUILT_SOURCES is honored only by ‘make all’, ‘make check’ and
  ‘make install’."

is no longer true.  This error can be corrected without explicitly
documenting the new behaviour, for example by making the list of
targets non-exhaustive in nature.

Perhaps something like:

  ... BUILT_SOURCES is honored only by certain targets, including ‘make
  all’, ‘make check’ and ‘make install’.

Although not mentioning distribution at all means that someone reading
this section is left to figure out for themselves if and how these two
Automake features work together...

> ... in case some day automake tightens up "make dist" so it builds
> only those BUILT_SOURCES files that are actually required to build
> the tarball components.

There is no need to worry about this ever happening, because computing
such a subset of BUILT_SOURCES is impossible in general.

Cheers,
  Nick



Re: [PATCH] "make dist" did not depend on $(BUILT_SOURCES)

2017-11-28 Thread Nick Bowler
Hi Jim,

On 2017-11-28 11:21 -0800, Jim Meyering wrote:
> Date: Thu, 20 Mar 2014 12:31:32 -0700
> Subject: [PATCH] "make dist" did not depend on $(BUILT_SOURCES)
> 
> * lib/am/distdir.am (distdir-am): New intermediate target.
> Interpose this target between $(distdir) and its dependency
> on $(DISTFILES), so that we can ensure $(BUILT_SOURCES) are
> all created before we begin creating $(DISTFILES).
[...]
>  NEWS   |  3 +++
>  lib/am/distdir.am  |  7 --
>  t/dist-vs-built-sources.sh | 56 
> ++
>  t/list-of-tests.mk |  1 +
>  4 files changed, 65 insertions(+), 2 deletions(-)
>  create mode 100644 t/dist-vs-built-sources.sh

The Automake manual unequivocally states that BUILT_SOURCES files are
generated only when running 'make all', 'make check' or 'make install'.

So if they are going to be generated on 'make dist' as well, then the
manual needs a corresponding update.

Otherwise this looks like a useful improvement.

Cheers,
  Nick



Re: Finding #includes from a yacc .y.

2017-11-20 Thread Nick Bowler
Hi Ralph,

On 2017-11-20, Ralph Corderoy  wrote:
>> It seems wrong for foo.y to have to `#include
>> "path/from/root/to/bar.h" since that means it has to alter if they
>> move around the hierarchy.  Is there another way?
>
> I can be more precise having dug into this project a bit.
> Currently, it has
>
> sbr_libmh_a_CPPFLAGS = ${AM_CPPFLAGS} -I./sbr

This relative include path refers to the current working directory
of the compiler, which is normally the build directory and is thus
essentially equivalent to -I$(builddir)/sbr ...

> Would it be wrong or a misuse of top_srcdir to change that to
>
> sbr_libmh_a_CPPFLAGS = ${AM_CPPFLAGS} -I$(top_srcdir)/sbr

... so if your headers are in the source directory, as is typical,
then something like this is perfectly sensible.

Cheers,
  Nick



Re: warning: TEST_LDFLAGS' is defined but no program or library has 'TEST' as canonical name

2017-11-20 Thread Nick Bowler
Hi,

On 2017-11-20, Thomas Martitz  wrote:
> here's some quite annoying warning. I'm trying to define a variable
> TEST_LDFLAGS that multiple programs use. There is no program named TEST.
> The same works fine with TEST_CFLAGS (i.e. no warning is displayed).
>
> Here's the warning:
>
> Makefile.am:4: warning: variable 'TEST_LDFLAGS' is defined but no program
> or
> Makefile.am:4: library has 'TEST' as canonical name (possible typo)

I'm surprised there is no warning with CFLAGS; it appears this warning
is issued for mumble_SOURCES, LIBADD, LDADD, LDFLAGS and DEPENDENCIES
only.

> Here's the Makefile.am
>
> TEST_CFLAGS = -g
> TEST_LDFLAGS = -Wl,-z,defs
>
> bin_PROGRAMS = test
>
> test_CFLAGS = $(TEST_CFLAGS)
> test_LDFLAGS = $(TEST_LDFLAGS)
>
> Is this known? Is there a workaround? Can I ignore the warning?

If you were to later add a program called TEST, then the results could
be surprising.  But you can certainly ignore the warning if you'd like.

Alternately you can perhaps use a different name that does not conflict
with the Automake naming structure.  Perhaps LDFLAGS_FOR_TEST?

You can disable the warning outright with -Wno-syntax (but this might
disable more than you'd like).

Finally, this warning is not issued for variables substituted by configure.

Cheers,
  Nick



Re: Per-Object Flags for Autotool C++ library?

2017-11-04 Thread Nick Bowler
On 11/4/17, Jeffrey Walton  wrote:
> On Sat, Nov 4, 2017 at 3:56 PM, Nick Bowler  wrote:
>>> EXTRA_libcryptopp_la_DEPENDENCIES listing the objects worked for Linux
>>> and OS X, but not Solaris. For Solaris I needed to drop the leading
>>> `EXTRA`, and use just `libcryptopp_la_DEPENDENCIES`.
>>
>> Is that just because you happen to be running an antique version of
>> Automake on the Solaris machine?
>
> Well, I'm not sure. Is this considered old:
>
> $ automake --version
> automake (GNU automake) 1.11.2

Well, it's coming up on its 6th birthday :)

> One of our driving principles is "things just work". We don't want
> library users inconvenienced or installing extra software. They should
> be able to sit down at their computer, run configure, and everything
> should work as expected.
>
> If something does not work as expected then it becomes our problem.
> We are expected to find workarounds so library users are not
> inconvenienced.

Library users shouldn't be running Automake at all, because when you
distribute a package all of the generated files are included (and do
not depend on Automake).

e.g., if you create your package with the latest versions then it
should "just work" on Solaris.

Cheers,
  Nick



Re: Per-Object Flags for Autotool C++ library?

2017-11-04 Thread Nick Bowler
Hello,

> EXTRA_libcryptopp_la_DEPENDENCIES listing the objects worked for Linux
> and OS X, but not Solaris. For Solaris I needed to drop the leading
> `EXTRA`, and use just `libcryptopp_la_DEPENDENCIES`.

Is that just because you happen to be running an antique version of
Automake on the Solaris machine?

Cheers,
  Nick



Re: Per-Object Flags for Autotool C++ library?

2017-11-03 Thread Nick Bowler
On 11/3/17, Jeffrey Walton  wrote:
> On Thu, Nov 2, 2017 at 6:04 PM, Jeffrey Walton  wrote:
>> I'm working on adding Autotools to a C++ library and test program. My
>> Automake.am has:
>>
>> 
>> ...
>
> I believe I applied Nick and Mathieu correctly. The project is
> available at https://github.com/noloader/cryptopp-autotools . It
> includes the six Git commands to duplicate the issue.
>
> The new issue is, the compile stops after about 4 files are compiled.
> Here's the pastebin of `make V=1`: https://pastebin.com/nCYN2RHh. The
> error is also shown below.
>
> The linker is invoked for reasons unknown to me at the moment. Most of
> the objects are missing. I even deleted the project's directory and
> re-cloned to ensure they were not old artifacts hanging around.

For whatever reason it appears that the generated makefile has missing
prerequisites for libcryptopp.la.  I would expect everything listed in
LIBADD to end up as a prerequisite of the library.  This might require
some investigation to find out why that apparently did not happen in
your case.

Adding everything to EXTRA_libcryptopp_la_DEPENDENCIES might help as
a workaround, e.g.,

  EXTRA_libcryptopp_la_DEPENDENCIES = $(libcryptopp_la_LIBADD)

But this (or equivalent) should have happened automatically.

> I have no idea why a C compiler is being invoked in some places. I
> took great care to ensure Autoconf knew this was a C++ project, and
> not a C project. That's another problem I've been searching for an
> answer for.

It seems it decided to link the library using the C compiler because no
source files are specified for the library.  There may be (or should be)
a way to force it one way or the other, but an obvious workaround is to
specify at least one C++ source file in libcryptopp_la_SOURCES (could be
one of the real files or just a stub).  The _SOURCES objects will appear
earlier on the linker command line than any of the _LIBADD objects.

Cheers,
  Nick



Re: Per-Object Flags for Autotool C++ library?

2017-11-02 Thread Nick Bowler
Hi Jeffrey,

On 11/2/17, Jeffrey Walton  wrote:
> I'm working on adding Autotools to a C++ library and test program. My
> Automake.am has:
>
> lib_LTLIBRARIES = \
>libcryptopp.la
>
> libcryptopp_la_SOURCES = \
>cryptolib.cpp \
>cpu.cpp \
>integer.cpp \
>
>...
>
> cpu.cpp needs additional flags to enable ISAs on IA-32, Aarch64 and
> Power7/Power8. According to
> https://www.gnu.org/software/automake/manual/html_node/Per_002dObject-Flags.html,
> I need to add an additional library:
>
> CPU_FLAG = -msse2 -msse3 -mssse3
> libcpu_a_SOURCES = cpu.cpp
> libcpu_a_CXXFLAGS = $(CXXFLAGS) $(CPU_FLAG)

Note that you should not include $(CXXFLAGS) here.  CXXFLAGS is always
included, so with this it will be duplicated on the command line, which
might be undesired by the user.

> Now that the objects are built we need to add libcpu.a back into
> libcryptopp.la in the exact position it would have been in if I could
> have specified per-object flags. The Automake manual gives an example
> of linking a program with disjoint libraries, but not adding the
> extraneous library back to the main (primary?) library at a particular
> position.
>
> The "in the exact position" is important.

Not too familiar with C++ stuff but I would be a bit concerned that
it might not be possible at all to force a particular link order for
the objects in the static version of the library.

Nevertheless for the shared library case you can probably achieve this
using several dummy libraries.  Something like this should work OK
(totally untested):

lib_LTLIBRARIES = libfoo.la
EXTRA_LTLIBRARIES = libdummy1.la libdummy2.la libdummy3.la

libdummy1_la_SOURCES = a.cpp b.cpp

libdummy2_la_SOURCES = c.cpp d.cpp
libdummy2_la_CXXFLAGS = -mstuff

libdummy3_la_SOURCES = e.cpp f.cpp

libfoo_la_SOURCES =
libfoo_la_LIBADD = $(libdummy1_la_OBJECTS) \
   $(libdummy2_la_OBJECTS) \
   $(libdummy3_la_OBJECTS)

and then the linking order should be a, b, c, d, e, f -- with c and d
compiled using your special flags.

Cheers,
  Nick



Re: Parallel build sometimes resulting in "fatal error: config.h: No such file or directory"

2017-10-16 Thread Nick Bowler
Hi Simon,

On 10/16/17, Simon Sobisch  wrote:
[...]
> Running without `make -j` always work but using parallel builds sometime
> break with the mentioned error.
[...]
> ~~~
> gcc -O2 -pipe -finline-functions -fsigned-char -Wall -Wwrite-strings
> -Wmissing-prototypes -Wno-format-y2k -U_FORTIFY_SOURCE
> -Wl,-z,relro,-z,now,-O1  /home/simon/gnucobol/cobc/../cobc/cobc.c   -o
> ../cobc/cobc
> /home/simon/gnucobol/cobc/../cobc/cobc.c:26:10: fatal error: config.h:
> No such file or directory
>  #include "config.h"
>   ^~
> compilation terminated.

I took a quick look at your project.  The problem is likely this bit
from cobc/Makefile.am:

  COBC = $(top_builddir)/cobc/cobc$(EXEEXT)

  cobc.1: [...] $(COBC)

The problem is that this COBC macro does not match the Automake generated
target name.  When it gets pulled in as a prerequisite for cobc.1 and the
file does not already exist, the built-in GNU make rule applies, which
produces cobc from cobc.c.

This is the wrong rule, so the compilation fails.

The prerequisites in make rules typically should match the target names
exactly.  In this case, it should be:

  cobc.1: [...] cobc$(EXEEXT)

Hope that helps,
  Nick



Re: No rule to make target 'bzr.mk', needed by 'all-am'

2017-09-29 Thread Nick Bowler
On 9/29/17, Sascha Manns  wrote:
> Am Freitag, den 29.09.2017, 16:26 +0200 schrieb Sascha Manns:
>> i have a project what provides a file called "bzr.mk". This isnt
>> generated and should just installed in $(datadir)/bzrmk.
>> [...]
>> bzrmk_DATA = bzr.mk
>>
>> But while building the package i'm getting:
>> Making all in src
>> make[3]: Entering directory '/build/bzrmk-1.2.1/src'
>> make[3]: *** No rule to make target 'bzr.mk', needed by 'all-
>> am'.  Stop.
>> make[3]: Leaving directory '/build/bzrmk-1.2.1/src'
>> Makefile:464: recipe for target 'all-recursive' failed
>
> I found it out. I'll did the from a tarball, generated with make dist.
> Now i changed the src/Makefile.am, and included the bzr.mk in
> EXTRA_DIST.

That'll work.  Alternately you can use the dist_ prefix[1], e.g.,

  dist_bzrmk_DATA = brz.mk

[1] 
https://www.gnu.org/software/automake/manual/automake.html#Fine_002dgrained-Distribution-Control

Cheers,
  Nick



Re: Automake Digest, Vol 175, Issue 3

2017-09-05 Thread Nick Bowler
On 2017-09-05, Kip Warner  wrote:
[...]
> Hey Thomas. Good question. It could well be that no hackery at all is
> required with this. Here is my Makefile.am:
>
> https://github.com/cartesiantheatre/narayan-designer/blob/master/Source/Makefile.am
>
> See parser_clobbered_source_full_paths as an example. This variant
> containing the full path is used in BUILT_SOURCES, nodist_..._SOURCES,
> CLEANFILES, and as a target.
>
> The parser_clobbered_source_files_only variant containing the file
> names only is used on line 150 as a workaround for where bisonc++(1)
> emits its files.
>
> If you can see a more elegant way of solving the same problem I'm
> trying to, I'm all ears.

If your only uses of the directoryless filenames are in rules, then
just write the names including directories in the make variables and
strip off the directory components inside the rules, where you can
use much more powerful shell constructs.

Example:

  % cat >Makefile <<'EOF'
  FOO = a b/c d/e/f

  my_rule:
for i in $(FOO); do \
  case $$i in */*) i=`expr "$$i" : '.*/\(.*\)'`; esac; \
  printf '%s\n' "$$i"; \
done
EOF
  % make my_rule
  a
  c
  f

If you assume a reasonably-POSIXish shell, you can use something like
$${i##*/} to strip directory parts instead (I think this form will fail
on at least Solaris /bin/sh).
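
A quick illustration of the parameter-expansion form:

```shell
# ${i##*/} removes the longest prefix matching */, leaving just the
# file name; this is POSIX, but missing from some very old /bin/sh
# implementations.
out=
for i in a b/c d/e/f; do
  out="$out${out:+ }${i##*/}"
done
printf '%s\n' "$out"    # prints: a c f
```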

Cheers,
  Nick



Re: Portable $addprefix

2017-08-25 Thread Nick Bowler
Hello,

On 8/24/17, Kip Warner  wrote:
> I'd like to transform the following variable in my Makefile.am from...
>
> files_only = a.foo b.foo c.foo d.foo ...
>
> Into...
>
> files_with_path = dir/a.foo dir/b.foo dir/c.foo dir/d.foo ...

I'm not aware of any truly portable way to do this directly in make.

But your example looks like a pretty static list (i.e., this list won't
be changed by the user after the package is generated), so the portable
way is to just generate both lists in advance, at the same time you run
automake (perhaps with a perl script that postprocesses Makefile.in).

If the list depends on configure results then another possibility is
to have configure generate both lists.

Finally, while not portable to all make implementations, expansions
like this:

  $(files_only:%=dir/%)

do work in multiple implementations other than GNU make.
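
If configure generates the lists, the transformation is simple shell
(variable names taken from the original message):

```shell
files_only="a.foo b.foo c.foo d.foo"
# Build the prefixed list with a loop; runs in any POSIX shell.
files_with_path=
for f in $files_only; do
  files_with_path="$files_with_path${files_with_path:+ }dir/$f"
done
printf '%s\n' "$files_with_path"
```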

Cheers,
  Nick



Re: Not installing to hard-coded locations vs polkit's fixed location

2017-08-21 Thread Nick Bowler
Hi,

On 2017-08-21, Mike Fleetwood  wrote:
> I'm working on adding installation of a polkit action file into
> GParted's build and install system, however the polkit daemon only
> recongises action files installed into the single location of
> /usr/share/polkit-1/action/.

There is a section about this issue in the Automake manual[1].

> Currently the Makefile.am contains this line:
> (larger fragment of the Makefile.am below)
> polkit_actiondir = $(datadir)/polkit-1/actions
[...]
> Are there any resolutions to this?
> I could:
> 1) Leave things as they are and document it as the builders
>responsibility, that when prefix defaults to /usr/local, or anything
>other than /usr, that the polkit action file will need manually
>installing into the correct location under a unique name so as not to
>overright any distro package provided copy.

It is pretty much fine as is.  If it matters, the installer can specify
polkit_actiondir when they install your package, for example:

  % make polkit_actiondir=/the/correct/location install

Just include a note about it in your README.

Things get a bit more complicated if you want the default install
location to be something probed at configure time.  This usually
involves some heuristics to get a reasonable user experience.
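
A common compromise (untested sketch) is a --with option whose default
stays inside the prefix, so distcheck and unprivileged installs keep
working while packagers can point it at the real location:

```m4
AC_ARG_WITH([polkit-actiondir],
  [AS_HELP_STRING([--with-polkit-actiondir=DIR],
    [directory for polkit action files])],
  [polkit_actiondir=$withval],
  [polkit_actiondir='${datadir}/polkit-1/actions'])
AC_SUBST([polkit_actiondir])
```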

> 2) Set polkit_actiondir to /usr/share/polkit-1/action but that is
>against automake guidance and breaks 'make distcheck'.

This is generally a bad idea, as installing to hardcoded locations breaks
many things.  For example, it will prevent successful installation as an
unprivileged user (unless the user knows to override polkit_actiondir).

> Are there any other solutions which are reasonable?

Another option (which might not be acceptable for you) is to do
nothing: i.e., don't install any files into the external location
by default.  The user can place files manually into the correct
locations as required.

[1] 
https://www.gnu.org/software/automake/manual/automake.html#Hard_002dCoded-Install-Paths

Cheers,
  Nick



Re: Integrating flexc++(1) and bisonc++(1) into Makefile.am

2017-07-12 Thread Nick Bowler
Hello,

On 7/12/17, Kip Warner  wrote:
> My challenge is replicating their functionality for flexc++(1) and
> bisonc++(1) in the absense of macros to make their usage easier in
> Automake
[...]
> In trying to integrate the two tools into my build environment, I've
> attempted the following in Makefile.am:
[...]
> BUILT_SOURCES = \
[...]
>   Source/ParserBase.h \
>   Source/Parser.h \
>   Source/Parser.ih \
>   Source/Parser.cpp
>
> myprogram_SOURCES = \
[...]
>   Source/Parser.cpp
[...]
> # Generate parser source from Backus-Naur grammar rules via bisonc++...
> Source/ParserBase.h:
> Source/Parser.h:
> Source/Parser.ih:
> Source/Parser.cpp: Source/Parser.ypp
>   $(BISONCPP) --target-directory=$(top_builddir)/Source $<
>
> FLEXCPP and BISONCPP are obtained via AC_PATH_PROG in configure.ac.
>
> This all works ok, but I suspect this is not an elegant solution and
> there are some very good suggestions from this mailing list.

There aren't really any "elegant" solutions.  Make handles this kind of
tool quite badly.  It is possible to get things to work but it is always
a tradeoff between flexibility of your build system and simplicity of
your rules.

If you are happy with this method then it is totally fine.  Do make
sure parallel builds work by testing them routinely (both clean and
incrementally) -- I think listing everything in BUILT_SOURCES like you
do probably "resolves" any parallelism problems here (by reducing
opportunities for parallelism).

The Automake manual has a section on writing portable make rules for tools
that produce multiple outputs[1], with a discussion of various approaches
and their limitations.  I generally prefer approaches using a dedicated
witness file.
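As a rough sketch of the witness-file approach adapted to the quoted Makefile.am (untested; the rule and output names follow the example above, the parser.stamp name is my own):

```makefile
# Witness (stamp) file pattern for a tool with multiple outputs.
# Running bisonc++ once updates all four files; the stamp records that.
parser.stamp: Source/Parser.ypp
	$(BISONCPP) --target-directory=$(top_builddir)/Source $<
	@touch $@

# All outputs depend on the witness.  The recovery recipe handles the
# case where an output was deleted but the stamp survived.
Source/ParserBase.h Source/Parser.h Source/Parser.ih Source/Parser.cpp: parser.stamp
	@if test -f $@; then :; else \
	  rm -f parser.stamp; $(MAKE) $(AM_MAKEFLAGS) parser.stamp; \
	fi
```

This avoids the problem where a parallel make runs the generator four times (once per output).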

Finally, consider whether you want to distribute the generated parser
sources.  That way your users don't need these tools installed just to
build your package.

[1] https://www.gnu.org/software/automake/manual/automake.html#Multiple-Outputs

Cheers,
  Nick



Re: Makefile.am and patch orthogonality

2017-06-21 Thread Nick Bowler
On 2017-06-21, Anton Shepelev  wrote:
> Contextual diff-files a very good means of collaborative development
> whenever they are used with source code, but I have a problem with
> .am files.  If two patches should add new source files to the same
> directory, they will also have to modify accordingly the Makefile.am
> that lists the sources for that location, and that is *very* likely
> to cause a conflict.

True, but the resolution is normally trivial.

> What should you recommend to prevent Makefile.am from becoming the
> bottleneck of collaborative development and patch isolation?

Since patch/diff work line-by-line, a common way to reduce merge
conflicts in this kind of a list is to have one filename per line.
A convention to keep the list sorted can help as then new files
aren't always added to the end.
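For example (a hypothetical file list; only the one-name-per-line, sorted layout matters):

```makefile
libfoo_a_SOURCES = \
	src/alpha.c \
	src/beta.c \
	src/gamma.c \
	src/omega.c
```

A patch adding src/delta.c then touches a single interior line, so two patches adding different files usually apply cleanly together.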

You can also setup a custom merge handler in git.
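For instance, git's built-in "union" merge driver keeps the lines from both sides instead of reporting a conflict, which can be reasonable for simple file lists (use with care: it cannot detect genuinely conflicting edits to the same line):

```
# .gitattributes at the top of the repository
Makefile.am merge=union
```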

Cheers,
  Nick



Re: distcheck does 'chmod a-w' - leads to mkdir "permission denied"

2017-03-03 Thread Nick Bowler
Hello,

On 3/3/17, Paul Jakma  wrote:
> Well, it's due to the distributed 'doc/quagga.info' being slightly older
> than the (built by automake) doc/defines.texi.
>
> Both are made by the 'make dist' and put in the dist tarball. Then
> distcheck unpacks that to a subdir, makes it RO, and creates another
> subdir for an out-of-tree build and builds in that. automake then makes
> a new doc/defines.texi, but the dist source ../../../doc/quagga.info of
> course is now older, so...

So the problem seems to be that defines.texi is *also* generated, but by
your normal build process?  configure outputs it?  If the docs depend on
configure output then there is no point in distributing generated docs
at all, since every user will have to regenerate them anyway.

[...]
> I guess we could find a way to /not/ distribute the built quagga.info,
> but that would then require people to have texinfo to build the project.

Well as you can see, texinfo is already required since the rule to build
documentation is being run in any case.

> Though, there's a risk whatever it is would still try build it as
> ../../../doc/quagga.info rather than doc/quagga.info.
>
> I'm using fairly standard Automake macros to do this:
>
> # Built from defines.texi.in
> BUILT_SOURCES = defines.texi
>
> info_TEXINFOS = quagga.texi
>
> quagga_TEXINFOS =
>
> Looking at the automake documentation I read:
>
> "It is worth noting that, contrary to what happens with the other
>   formats, the generated ‘.info’ files are by default placed in ‘srcdir’
>   rather than in the ‘builddir’."
>
> Could that be the source of the problem?

Well yes, but probably not by itself...

> It goes on:
>
>   "This can be changed with the ‘info-in-builddir’ option."
>
> And, indeed, when I add "AUTOMAKE_OPTIONS=info-in-builddir" to
> doc/Makefile.am the above problem disappears!

The builddir is writable so generating docs there will not fail (well,
unless the user doesn't have makeinfo installed).

Personally I prefer to generate files into builddir and distribute from
there (so I like this option), but there are other tradeoffs (this
approach can have different subtle problems, usually related to
unintentionally having files of the same name in srcdir and builddir).

> This seems to be sub-optimal in automake? Any auto-generation of files
> by automake used for the info documentation results in a broken
> distcheck?

I would expect it to work out of the box provided that your texi files
are not modified by the build system.

Cheers,
  Nick



Re: distcheck does 'chmod a-w' - leads to mkdir "permission denied"

2017-03-03 Thread Nick Bowler
Hi,

On 3/3/17, Paul Jakma  wrote:
> My make distcheck is broken and I can't figure out how to fix it or
> where the problem lies.
[...]
> chmod -R a-w quagga-1.2.0
> chmod u+w quagga-1.2.0
> mkdir quagga-1.2.0/_build quagga-1.2.0/_build/sub quagga-1.2.0/_inst
> chmod a-w quagga-1.2.0
> test -d quagga-1.2.0/_build || exit 0; \
[...]
> Making all in doc
> make[3]: Entering directory
> '/home/paul/code/quagga/quagga-1.2.0/_build/sub/doc'
> make  all-am
> make[4]: Entering directory
> '/home/paul/code/quagga/quagga-1.2.0/_build/sub/doc'
>MAKEINFO ../../../doc/quagga.info
> mkdir: cannot create directory ‘.am18743’: Permission denied

One of the things 'make distcheck' tests is that it can run builds from a
read-only source tree.  That is, remove all write permissions from the
unpacked tarball, then perform build actions (at the same time, it is
doing a VPATH build).

This includes running 'make dist' from such a source tree.

It looks to me like you have a problem where some build rule is trying
to write to srcdir (such rules are a common way of handling distributed
generated files).  This probably means you have a timestamp problem in
your distribution tarball (e.g., some distributed files are older than
their source files).

Expected behaviour on a freshly unpacked tarball is that all such
generated files are up to date, and therefore no build rules will
attempt to update them.
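If the stale timestamps cannot be avoided when the tarball is created, one common workaround is a dist-hook that refreshes the distributed generated files so they end up newer than their sources inside the tarball (untested sketch, written for a doc/Makefile.am):

```makefile
# Ensure the distributed info file is newer than the texi sources in
# the dist tree, so unpacked tarballs will not try to rebuild it.
dist-hook:
	touch $(distdir)/quagga.info
```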

Cheers,
  Nick



Re: AC_SUBST'ing foodir correctly?

2016-05-24 Thread Nick Bowler
On 2016-05-24, Wouter Verhelst  wrote:
> I'm adding a systemd unit to my package. To that end, I'm checking if
> there is a pkg-config .pc file for systemd which sets a variable
> "systemdsystemunitdir", and am trying to install the systemd unit in
> that location.
[snip configure script computes systemdunitdir from pkg-config]
> and then in my Makefile.am:
>
> if SYSTEMD
> systemdunit_DATA = nbd@.service
> endif
>
> (if you need the full files, they're in the git repository at
> git.debian.org/users/wouter/nbd.git)
>
> However, now my "make distcheck" fails, because the "make install"
> target disregards DESTDIR and tries to install files in the actual
> systemd unit directory, rather than the staging one. Clearly this means
> I'm doing something wrong, but I'm not sure what the proper way for
> doing this would be.

I suspect it is not the DESTDIR check which is failing -- the problem
is that your installation directory ignores prefix.  The package must
install all files into ${prefix}-relative locations by default.

Basically, distcheck is telling you that unprivileged installs will
fail because they try to install to /usr somewhere.

Here are some basic options, in increasing order of complexity:

  - Don't install the unit files (user can copy them manually).

  - Don't autodetect at all: just default to some ${prefix}-relative
installation directory and allow the user to change it manually.
E.g., make systemdunitdir=/path/to/wherever.

  - Munge the autodetected path into something relative to ${prefix}.

See "Installing to Hard-Coded Locations"[1] in the Automake manual for
some more information on this topic.

[1] 
https://www.gnu.org/software/automake/manual/automake.html#Hard_002dCoded-Install-Paths
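The third option might look roughly like this in configure.ac (untested sketch; it assumes the probed path starts with /usr, and relies on systemd.pc exposing systemdsystemunitdir, which the quoted configure script already uses):

```m4
# Ask pkg-config for the system unit dir, then rewrite it to be
# ${prefix}-relative so DESTDIR and unprivileged installs work.
probed=`$PKG_CONFIG --variable=systemdsystemunitdir systemd`
AS_CASE([$probed],
  [/usr/*], [systemdunitdir='${prefix}'`echo "$probed" | sed 's|^/usr||'`],
  [systemdunitdir='${prefix}/lib/systemd/system'])
AC_SUBST([systemdunitdir])
```

With the default prefix of /usr/local the files land in a staging-friendly location; an installer building distro packages passes --prefix=/usr and gets the real systemd directory.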

Cheers,
  Nick



Re: Are there shell equivalents to PREFIX etc.?

2016-03-28 Thread Nick Bowler
Hi Andy,

(Note that your mailer seems to have completely mangled all whitespace,
particularly the newlines, in your make snippet.  This makes it very
hard to read.  I have tried to manually correct it in the quoted text).

On 2016-03-28, Andy Falanga (afalanga)  wrote:
> My question is, I hope, quite simple.  I have a case where I made some
> distribution, installation and uninstall hooks in my Makefile.am:
>
>   EXTRA_DIST = setupenv.sh bootstrap tests
>
>   dist-hook:
>   rm -rf $$(find $(distdir)/tests -name \*.swp -o -name \*.pyc)
>
>   install-exec-hook:
>   mkdir -p $(prefix)/unit_tests/unittest2
>   for f in tests/*.py; do \
> cp $$f $(prefix)/unit_tests; \
>   done
>   for f in tests/unittest2/*.py; do \
> cp $$f $(prefix)/unit_tests/unittest2; \
>   done
>
>uninstall-hook:
>   rm -r $(prefix)/unit_tests
>
> Ordinarily, this works just fine.  However, when building the RPM for
> this software, the prefix is set to an alternative location in /opt.  I
> would have thought that the rules I've written would have worked with
> the RPM build system.  This isn't quite the case though.  When executing
> my install hook, the mkdir command fails because the common user doesn't
> have permissions to make directories in /opt/. .

I suspect your immediate problem is simply that your rules are not
respecting ${DESTDIR}.  See the Automake manual, section 12.4 "Staged
Installs"[1].  The RPM packager is almost certainly using this feature,
so you need to support it.  "make distcheck" tries to check that your
package properly supports this function, and probably would have caught
this issue.

Basically, all filenames that point to installed file locations must
start with ${DESTDIR}, for example:

  install-exec-hook:
mkdir -p ${DESTDIR}${pkgdatadir}/foo
cp ${srcdir}/file1 ${DESTDIR}${pkgdatadir}/foo

Fixing this is probably enough to make the RPM packager happy.

Note that your rule should most likely also specify ${srcdir} on the
source filename (unless these are generated files), otherwise VPATH
installations may fail (distcheck should catch this too).

As an aside, packages generally should not install files directly in
${prefix}; consider defining a separate directory variable, such as:

  unittestdir = ${prefix}/unit_tests

Finally, these files look to me like they really belong in a
package-specific installation directory by default, such as:

  unittestdir = ${pkglibexecdir}/unit_tests

[1] https://gnu.org/software/automake/manual/automake.html#Staged-Installs
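In fact, for plain file installation like this you may not need hooks at all: Automake's _DATA primaries handle ${DESTDIR}, ${srcdir} and uninstall automatically.  A sketch (untested; the file names are hypothetical, since the original rules used shell globs, which _DATA variables cannot express — each file must be listed):

```makefile
unittestdir = $(pkglibexecdir)/unit_tests
unittest2dir = $(unittestdir)/unittest2

# dist_ also distributes the files; drop it for generated files.
dist_unittest_DATA = tests/test_foo.py tests/test_bar.py
dist_unittest2_DATA = tests/unittest2/helper.py
```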

Hope that helps,
  Nick



Re: excluding intermediate flex test suite files from distribution?

2015-11-13 Thread Nick Bowler
On 11/12/15, Will Estes  wrote:
> On Thursday, 12 November 2015,  9:41 pm +, Gavin Smith
>  wrote:
>> On 12 November 2015 at 19:19, Will Estes  wrote:
>> > and Makefile.am:
>> >
>> > check_PROGRAMS = test
>> >
>> > test_SOURCES = test.l
>> > nodist_test_SOURCES = test.c
[...]
>> I'd guess that nodit_test_SOURCES = test.c is wrong, because of this:
>>
>> "You should never explicitly mention the intermediate (C or C++) file
>> in any SOURCES variable; only list the source file."
>>
>> http://www.gnu.org/software/automake/manual/html_node/Yacc-and-Lex.html
>>
>> I changed Makefile.am to
>>
>> check_PROGRAMS = test
>>
>> nodist_test_SOURCES = test.l
>> #nodist_test_SOURCES = test.c
[...]
>> I guess that's not what you want, because test.l isn't distributed there.
>>
>> Following Nick's suggestion to use dist-hook, the following appeared
>> to give good results:
>>
>> check_PROGRAMS = test
>>
>> test_SOURCES = test.l
>>
>> dist-hook:
>> rm -f ${distdir}/test.c
>>
>> With that, test.l is distributed but test.c isn't.
>>
>
> Thanks. That does what I'm looking for and I can scale that up to the entire
> test suite with a bit of effort.

This will probably work fine, but this has the potential problem that
test.c will be built every time you run 'make dist', just to be deleted
immediately.  This may or may not be a concern for you, but the likely
consequence is that lex will be required to run 'make dist', even though
it would otherwise not be required.

My original suggestion was to try something like this (not tested):

  EXTRA_DIST =

  nodist_test_SOURCES = test.l
  EXTRA_DIST += $(nodist_test_SOURCES)

Cheers,
  Nick



Re: excluding intermediate flex test suite files from distribution?

2015-11-12 Thread Nick Bowler
Hello,

On 2015-11-12, Will Estes  wrote:
> The flex program includes a test suite to enable testing of the version of
> flex built in the flex tree. From a distributed tar ball (made with "make
> dist"), it is possible to build flex without needing to have flex already
> installed because automake includes the intermediate files in the
> distribution.
>
> However, the test suite should not include the intermediate .c (and c++)
> files because the point of the test suite is to test the flex binary built
> in the tree, so that binary should build the intermediate .c (and c++)
> files.
>
> How do I do this? I've been playing with it and have not been able to come
> up with a solution. At most, I'm able to not include some generated header
> files, which then really confuses the test suite in the generated tar ball.
>
> What other information / examples can I provide to make this clear?

If you can provide a (short!) example Makefile.am, members of this list
may be able to suggest specific changes.

The Automake manual has a chapter on how the distribution is built[1].

A possible solution (not sure if it will work for you, you may have to
experiment a bit):

  - Prevent Automake from distributing any of the testsuite-related
flex source files (should be achievable using nodist_).
  - Manually include only the files you actually want, e.g., by using
EXTRA_DIST.

Using a dist-hook may be helpful if EXTRA_DIST is not expressive enough.

[1] https://gnu.org/software/automake/manual/automake.html#Dist

Cheers,
  Nick



Re: Why hasn't this ARFLAGS patch not been merged yet?

2015-11-03 Thread Nick Bowler
On 10/31/15, Kim Walisch  wrote:
> Hi,
>
> I have two open source projects (primesieve and primecount) which use
> the GNU build system. Both currently print a warning during make
> (tested on Ubuntu 15.10 and Fedora 22):
>
> ar: `u' modifier ignored since `D' is the default (see `U')
>
> For primesieve I have opened an issue on GitHub
> https://github.com/kimwalisch/primesieve/issues/16 with more details.
>
> A patch has been proposed by Pavel Raiskup to fix this issue
> http://www.mail-archive.com/automake-patches@gnu.org/msg07705.html
> on the 2nd June 2015 but as far as I can see the patch has not been
> merged into the master branch yet.
>
> I think that many (maybe even most?!) projects using the GNU Build
> System are affected by this issue. So my question is why hasn't this
> patch not yet been accepted?

The issue is just cosmetic, right?  Everything works fine, just some
noise is printed during the build.  The patch just suppresses the
warning message by removing use of the 'u' option.

I believe the Automake project is still looking for a new maintainer[1],
which is probably why the fix has not yet been merged.

[1] https://lists.gnu.org/archive/html/automake/2014-11/msg5.html

Cheers,
  Nick



Re: Why hasn't this ARFLAGS patch not been merged yet?

2015-11-03 Thread Nick Bowler
On 11/3/15, Michael Felt  wrote:
> I suppose in a GNU only world, this would be okay. However, my man page for
> ar still says:
>
>-u
> Copies only files that have been changed since they were last
> copied (see the -r flag discussed previously).
>
> while the option -D is not found.
[...]
> so, I think adding -D as a default will break "all" for me.

The patch actually just removes 'u' from the default; it does not add
any nonstandard options.  The issue is that some GNU systems are
shipping versions of 'ar' which enable the (GNU-specific) 'D' option
by default, which causes the 'u' option to print a warning and have no
other effect.



Re: why forbidding "include" a sub-makefile.am with absolute path

2015-06-29 Thread Nick Bowler
On 2015-06-29 22:33 +0800, 赵峰(远猷) wrote:
> I find the following code in automake.
> >my $PATH_PATTERN = '(\w|[+/.-])+';
> ># This will pass through anything not of the prescribed form.
> >my $INCLUDE_PATTERN = ('^include\s+'
> >                       . '((\$\(top_srcdir\)/' . $PATH_PATTERN . ')'
> >                       . '|(\$\(srcdir\)/' . $PATH_PATTERN . ')'
> >                       . '|([^/\$]' . $PATH_PATTERN . '))\s*(#.*)?' . "\$");
> 
> but why need forbidding "include" sub-makefile.ams with absolute path
> by the last line?  In my Makefile.am, i need to include another
> sub-makefile from the path in a variable(ANOTHER_PJ_DIR) declared with
> AC_SUBST.
> > Makefile.am
> > include @another_pj_...@common.mk
> the ANOTHER_PJ_DIR contains an absolute path of another project, but
> the include doesn't get parsed by automake.  How can I do it?  thx!
> miles.zhaof

Configure substitutions (AC_SUBST) cannot work because Automake includes
are expanded only once, when Automake is run.

The generated Makefile.in file (which gets processed by configure) does
not contain the include commands at all.
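Concretely, the only include forms that regex accepts are fixed, literal paths (file names here are hypothetical):

```makefile
# Accepted by automake; expanded when automake runs, not at make time:
include $(top_srcdir)/common.mk
include $(srcdir)/fragment.am
include local-rules.mk
```

Anything involving an @substitution@ or other variable cannot work, because by the time configure could substitute it, automake has already finished.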

Regards,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: How are TAP programs initiated?

2015-05-29 Thread Nick Bowler
On 2015-05-29 08:24 -0700, Arthur Schwarz wrote:
> I've looked through tap-driver.sh and Makefile and I have missed how a
> test script is executed and information passed to tap-driver.sh. I do
> know that the program passed to tap-driver.sh seems to be ignored.

How do you know that?  The program name is clearly not ignored;
in fact it is mandatory.

> The ${AWK} program then reads from a 'getline' which I guess reads
> from a piped input.

Probably, but you don't have to guess.  The awk language is
defined in the POSIX standard, and the GNU awk manual is excellent.
See §4.9 "Explicit Input with Getline"[1]

  4.9.1 Using getline with No Arguments

  The getline command can be used without arguments to read input from
  the current input file. All it does in this case is read the next
  input record and split it up into fields.

> but I can't find in Makefile where the script is executed. Is the
> stuff preceding tap-driver execution executing the test scipt?

I tried a quick test, and looked at the generated Makefile.  There is
only one command in my Makefile that executes tap-driver, and it looks
like this:

  $(am__check_pre) $(TAP_LOG_DRIVER) --test-name "$$f" \
  --log-file $$b.log --trs-file $$b.trs \
  $(am__common_driver_flags) $(AM_TAP_LOG_DRIVER_FLAGS) $(TAP_LOG_DRIVER_FLAGS) 
-- $(TAP_LOG_COMPILE) \
  "$$tst" $(AM_TESTS_FD_REDIRECT)

The program name and its arguments are everything after the "--".

> Is TAP processing:
> 
> ./script | tap-driver.sh  # or
> tap-driver.sh -- ./script # passed as an argument

Neither usage is correct.  Did you try them?  I imagine you did not
because the first command causes tap-driver.sh to print its basic usage
instructions:

  Usage:
tap-driver.sh --test-name=NAME --log-file=PATH --trs-file=PATH
  [--expect-failure={yes|no}] [--color-tests={yes|no}]
  [--enable-hard-errors={yes|no}] [--ignore-exit]
  [--diagnostic-string=STRING] [--merge|--no-merge]
  [--comments|--no-comments] [--] TEST-COMMAND
  The `--test-name', `--log-file' and `--trs-file' options are mandatory.

[1] https://gnu.org/software/gawk/manual/gawk.html#Getline

Regards,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: How can I pass $@ to Makefile?

2015-05-28 Thread Nick Bowler
On 2015-05-28 10:23 -0700, Arthur Schwarz wrote:
> I'm have a little program in my makefile.am:
> 
> test3.abc:
>   echo '#!/bin/bash'  > test3.abc
>   echo "echo test3.abc $$# ' [' $$@ ']'>> test3.log" >> test3.abc
>   echo "echo I am a test script>> test3.log" >> test3.abc
> 
> Which works fine except the $$#. What I'm trying to do is to have:
> 
> test3.abc
>echo test3.abc $# ' [' $@ ']
> 
> But I don't know how to do the escapes properly.

Since make prints out all the commands it runs by default, quoting
issues are normally straightforward to debug as you can just look at
the commands it prints to see what's wrong with them.  You can go even
further and use a command like:

  make SHELL='sh -x'

to additionally have the shell print the commands it runs (after all
expansions).

So let's try this with your make rule.  The lines starting with a
"+" character are the actual commands being executed by the shell:

  % make SHELL='sh -x' test3.abc
  echo '#!/bin/bash'  > test3.abc
  + echo '#!/bin/bash'
  echo "echo test3.abc $# ' [' $@ ']'>> test3.log" >> test3.abc
  + echo 'echo test3.abc 0 '\'' ['\''  '\'']'\''>> test3.log'
  echo "echo I am a test script>> test3.log" >> test3.abc
  + echo 'echo I am a test script>> test3.log'

Do you see the problem now?

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: What is a 'program' in Section 15.3.3.1?

2015-05-27 Thread Nick Bowler
On 2015-05-27 13:55 -0700, Arthur Schwarz wrote:
> In looking at tap-driver.sh there doesn't appear to be a place where a
> 'program' is accepted on the input command line. It appears that after all
> options are read if the input command line '$#' is not zero then an error is
> declared. So, is the TAP interface different from other Custom Test Driver
> interfaces?

I am guessing that you are referring to this line in tap-driver.sh:

  test $# -gt 0 || usage_error "missing test command"

The error is only printed if $# is *equal* to zero (i.e., additional
arguments including the program name are mandatory).

Regards,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: Setting environment variable for make dist

2015-05-19 Thread Nick Bowler
Hello,

On 2015-05-18 16:10 +0800, Bas Vodde wrote:
> On 14 May 2015, at 9:24 pm, Nick Bowler  wrote:
> > But I think there is a solution: we can (ab)use the fact that 'make dist'
> > internally performs a recursive make invocation.  This gives us the chance
> > to add things to the make command line, using AM_MAKEFLAGS.  So putting
> > 
> >  AM_MAKEFLAGS = COPYFILE_DISABLE='$(COPYFILE_DISABLE)'
> > 
> > into Makefile.am should work I think (not tested), because variables
> > defined on the make command line *are* exported into the environment.
>
> I thought about it a bit and guess that this would *always* define the
> COPYFILE_DISABLE=1. That is, not just in “make dist” but also in other
> targets, correct?

The flags will be passed on recursive make invocations, so it will be
defined whenever that happens.

This includes targets like 'make all', 'make clean' and 'make dist', but
does not normally include targets like 'make src/foo.o'

> I think that isn’t what I want as I don’t want to go too much against
> the filesystem defaults, to avoid potential troubles :)

Does the COPYFILE_DISABLE=1 flag affect more than just the 'tar'
behaviour?  Then yeah, there may be potential troubles.  It would be
good to characterize the effects of this flag.

> Shouldn’t this be solved inside autotools itself? I guess it is
> something valid for every Mac user who creates a distribution on Mac….
> and I guess it is exactly these kind of OS differences autotools is
> trying to resolve?

This does sound like a portability problem that Automake could solve.
There is a related bug report[1].

With recent-ish versions of Automake you could try futzing with the TAR
environment variable instead, together with a suitable configure test.
This might be trickier to get right, however.

[1] https://debbugs.gnu.org/cgi/bugreport.cgi?bug=9822

Regards,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: How do you set VERBOSE for parallel testing

2015-05-14 Thread Nick Bowler
On 2015-05-13 13:18 -0700, Arthur Schwarz wrote:
> > There are 3 "normal" ways a make variable can be set (this is not the
> > complete picture but it will do):
> > 
> >  (1) In the environment (FOO=bar make)
> >  (2) In the Makefile (FOO=bar in the Makefile)
> >  (3) on the make command line (make FOO=bar)
> 
> Before I go out on a limb and say something I'll regret, if I understand
> this correctly any user variable (not developer variable) can always be
> defined via (1) and (3). Is this correct?

Not necessarily, because variables like prefix are going to have an
assignment in the Makefile.

> Can I do this with lists, e.g.: make TEST="test1 test2..."
> 
> And for variables which have script commands associated with them?
> For example: make AM_TESTS_ENVIRONMENT=". $(srcdir)/tests-env.sh; \
>if test -d /usr/xpg4/bin; then \
>   PATH=/usr/xpg4/bin:$$PATH; export PATH; \
>fi;" 
> 
> If I have several variables do I write: make FOO=bar BAZ=snafu ...?
> 
> Suppose the variable has a generated Makefile component (TEST_LOGS) and a
> user can change it. What changes, if anything? In order to modify such a
> variable is the user required to use form (3)?
> 
> Does anything change if the user variable name is different from the
> Makefile.am variable name, e.g. ext_LOG_FLAGS and AM_ext_LOG_FLAGS?
> 
> Does anything change if the user variable name is the same as the
> Makefile.am variable name, e.g. TESTS?

The answer to all of these questions is "Yes, the user running 'make'
can override any make variable whatsoever and set it to whatever he or
she likes."  Whether this is a reasonable thing for the user to do is
another story.  Overriding a variable that was not designed to be
overridden by the makefile author is probably going to break the
build rather badly.

Make provides quite a lot of options that drastically alter its
behaviour.  Actually using these features will often break things.

But none of this really has much to do with Automake, except
tangentially because Automake is a tool to help generate portable
make programs.

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: Setting environment variable for make dist

2015-05-14 Thread Nick Bowler
Hello,

On 2015-05-10 12:57 +, Bas Vodde wrote:
> I'm trying to change my project's configure.ac so that it sets the
> COPYFILE_DISABLE=1 environment variable to influence the make dist
> target.
> 
> Background: The COPYFILE_DISABLE=1 avoids having a second top-level
> directory in your package when packaging on MacOSX.
> 
> So far, my attempt was to set the variable in the configure.ac file
> and use AC_SUBST on it, but that didn't seem to work.

Right.  AC_SUBST([COPYFILE_DISABLE]) ordinarily does two things:

  - config.status will substitute @COPYFILE_DISABLE@ in output files
  - it causes Automake to put a line like this:

  COPYFILE_DISABLE = @COPYFILE_DISABLE@

into the Makefile.in files it generates.

Unfortunately this alone is not sufficient, because make variables are
not generally exported into the environment (which is what you actually
want to happen).

But I think there is a solution: we can (ab)use the fact that 'make dist'
internally performs a recursive make invocation.  This gives us the chance
to add things to the make command line, using AM_MAKEFLAGS.  So putting

  AM_MAKEFLAGS = COPYFILE_DISABLE='$(COPYFILE_DISABLE)'

into Makefile.am should work I think (not tested), because variables
defined on the make command line *are* exported into the environment.
You will still need the AC_SUBST to define COPYFILE_DISABLE in the
first place.
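A configure.ac fragment to define COPYFILE_DISABLE only where it matters might look like this (untested sketch; it assumes the variable is only meaningful on Darwin/macOS):

```m4
AC_CANONICAL_HOST
AS_CASE([$host_os],
  [darwin*], [COPYFILE_DISABLE=1],
  [COPYFILE_DISABLE=])
AC_SUBST([COPYFILE_DISABLE])
```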

Regards,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: How do you set VERBOSE for parallel testing

2015-05-13 Thread Nick Bowler
On 2015-05-13 08:20 -0700, Arthur Schwarz wrote:
> > Usually I run my tests with something like this:
> 
> > make check -j8 VERBOSE=1
> 
> Thanks Peter.
> 
> My question is is this the only way to use VERBOSE? The Automake Manual
> seems to say that VERBOSE is a variable, not a make argument. And, as a
> variable, if the user (you) can change it's value then the appropriate way
> to do it is either:
> env VERBOSE=1 make -e check
> or
> VERBOSE=1; export VERBOSE; make -e check

Look at the Makefile.in which is generated by Automake.  VERBOSE is an
environment variable (at least with the parallel test harness), and the
test condition is that VERBOSE is set to a non-empty value.

The interaction between make variables and environment variables is
complicated.

There are 3 "normal" ways a make variable can be set (this is not the
complete picture but it will do):

 (1) In the environment (FOO=bar make)
 (2) In the Makefile (FOO=bar in the Makefile)
 (3) on the make command line (make FOO=bar)

The priority is in that order: if a variable is set on the command line
it will override any definition in the Makefile, which in turn will
override any definition in the environment.  You can use the -e option
to make to switch the order of (1) and (2), which will probably break a
lot of things.
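That priority order is easy to verify with a throwaway makefile (assuming GNU make is installed; the variable name FOO and the file name Makefile.demo are arbitrary):

```shell
# Create a makefile that assigns FOO itself.
printf 'FOO = makefile\nall:\n\t@echo $(FOO)\n' > Makefile.demo

# Makefile assignment beats the environment:
FOO=env make -sf Makefile.demo        # prints "makefile"

# Command line beats the Makefile:
make -sf Makefile.demo FOO=cmdline    # prints "cmdline"

# -e flips the environment above the Makefile:
FOO=env make -esf Makefile.demo       # prints "env"

rm -f Makefile.demo
```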

Now, before make runs a command, it adds the make variables from (3) to
the environment.  Some, but not all[1], make implementations will further
update existing environment variables [those from (1)], with assigned
values from (2).

So the takeaway is this:

 - Never set VERBOSE to any value in Makefile.am.  Such an assignment
   will be useless at best.

 - Use make VERBOSE=1 check to enable verbose mode when running tests.

[1] Results from a quick test: GNU make and Heirloom make update the
environment, while dmake, NetBSD make, and FreeBSD make do not.

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: Newbie: make dist succeeds, make distcheck fails to copy e.g. '../config/compile'

2015-04-17 Thread Nick Bowler
On 2015-04-17 00:13 +0200, Roelof Berg wrote:
[...]
> 'make dist' runs very well and its output can be built on a clean 
> system, all tests pass, the tarball seems to be perfectly ok. 'make 
> distcheck', however, complains about missing files:
>
> prompt$ make distcheck
> [...]
>dist-hook
> make[3]: Entering directory 
> '/home/roelof/devel/limereg/limereg-1.3.1/_build'
> for file in ../config/compile ../config/config.guess 
> ../config/config.sub ../config/depcomp ../config/install-sh 
> ../config/ltmain.sh ../config/missing; do \
>cp $file limereg-1.3.1/$file; \
> done
[snip errors]
> Makefile:925: recipe for target 'dist-hook' failed
> [...]
> 
> A folder named config would be found if there was no '..' in the path, I 
> wonder where the '..' comes from. And to be honest, I have no idea what 
> is the purpose of this dist-hook build step at all. Automake together 
> with libtool is so huge for a newbie ...

A dist-hook is a Makefile rule, written by you.  There is probably
something in your Makefile.am like:

  dist-hook:
... your commands ...

It is likely that your dist-hook rule does not properly support VPATH
builds, when the source and output directories are different.  When you
run "make distcheck", one of the things it tests about your package is
that "make dist" works in this configuration.  It has detected and
reported an error in your package.

Usually these sorts of errors are the result of mixing up $(srcdir) and
$(builddir) in the make rules.
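A VPATH-safe version of such a rule copies from $(srcdir) and into $(distdir) explicitly. A rough sketch (untested; the file names are taken from the quoted output, and $(MKDIR_P) is the usual Automake-provided variable):

```make
dist-hook:
	$(MKDIR_P) "$(distdir)/config"
	for file in compile config.guess config.sub; do \
	  cp "$(srcdir)/config/$$file" "$(distdir)/config/$$file" || exit 1; \
	done
```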

To do your own VPATH builds, outside of "make distcheck", first do a
"make distclean" to delete all the build products, then cd into an empty
directory and run configure from there, e.g.:

  % make distclean
  % mkdir build
  % cd build
  % ../configure
  % make dist

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: Question on AM_TAP_AWK

2015-04-06 Thread Nick Bowler
On 2015-04-06 06:44 -0700, Arthur Schwarz wrote:
> Automake manual pg. 120

When referencing the manual it is best to provide section numbers,
and also quote the section title.  Ideally, further include a brief
quotation of the text you are discussing.

Page numbers:

  (a) don't exist in non-print formats, and
  (b) change over time.

Section headings suffer from (b) too but to a much lesser degree.

This is important not only so that we can figure out what part of the
manual is being discussed today, but also so that 10+ years from now,
someone who finds this question can understand the context.

> Does anyone know where AM_TAP_AWK is defined and what it does?

I assume we are talking about §15.4.2 "Use TAP with the Automake test
harness"[1], specifically this line in the example:

  TEST_LOG_DRIVER = env AM_TAP_AWK='$(AWK)' $(SHELL) \
$(top_srcdir)/build-aux/tap-driver.sh

AM_TAP_AWK is defined right there, in the example code.  Its purpose is
to tell the tap-driver.sh program which AWK program to use.

[1] 
https://gnu.org/software/automake/manual/automake.html#Use-TAP-with-the-Automake-test-harness

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: How to start serial tests

2015-03-27 Thread Nick Bowler
On 2015-03-27 12:57 -0700, Arthur Schwarz wrote:
> I'm struggling with learning how automake tests run and linking into the
> automake generated test harness(es). I just tried to generate the serial
> test harness and came across this little oddity:
> 
> Manual: GNU Automake v1.14.1, 6 Nov 2013
> 
> Chp. 15.2.2 Older (and discouraged) serial test harness (pg 112)
> 
>   "The serial test harness is enabled by the Automake option
> serial-tests."
> 
> Neither: automake --serial-tests
> nor  automake serial-tests
> work.

Automake options can be set in Makefile.am, e.g.,

  AUTOMAKE_OPTIONS = serial-tests

It's also possible to set them as arguments to AM_INIT_AUTOMAKE
in configure.ac.
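For example (the package name and version here are placeholders):

```m4
AC_INIT([mypackage], [1.0])
AM_INIT_AUTOMAKE([serial-tests])
```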

I suppose the manual would benefit from a cross-reference there
because these options are documented[1] in a later chapter...

[1] https://gnu.org/software/automake/manual/automake.html#Options

Hope that helps,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: GNU libtool-2.4.6 released [stable]

2015-03-25 Thread Nick Bowler
[Adding Automake]

On 2015-03-23 16:00 +, Gary V. Vaughan wrote:
> > On Mar 23, 2015, at 2:42 PM, Bob Friesenhahn  
> > wrote:
> > > On Mon, 23 Mar 2015, Christian Rössel wrote:
[...]
> > > I discovered some files in
> > > http://ftpmirror.gnu.org/libtool/libtool-2.4.6.tar.gz that IMO
> > > don't belong there. The filenames start with "._" (just do a 'find
> > > . -name "._*"') and seem to contain dropbox meta data.
[...]
> > The 'file' command describes these as "AppleDouble encoded Macintosh file".
> > 
> > It does not seem possible that these files were listed for inclusion
> > in the release so they must be an artifact of the 'tar' program
> > used.
[...]
> Most likely, Apple's tar is passing along file system metadata for the
> destination machine :-(
> 
> While I won't be rolling any future releases, it definitely seems
> worth noting in the README-release notes that before uploading, to a)
> use GNU tar b) check that there are no weird hidden files in the
> tarball!

Is this a bug in Automake then?  Presumably it should either be
generating good tarballs or failing hard.  Maybe we could at
least augment distcheck to test for these artifacts and reject
the package in that case.
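Until something like that lands, a project can add such a check itself. A rough sketch of a distcheck-hook (untested; distcheck-hook runs after the tarball has been unpacked into $(distdir)):

```make
distcheck-hook:
	@if find "$(distdir)" -name '._*' | grep .; then \
	  echo "AppleDouble artifacts found in the distribution" >&2; \
	  exit 1; \
	fi
```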

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: How are env variables passed to the compilation process

2015-03-10 Thread Nick Bowler
On 2015-03-10 22:04 +, Andy Falanga (afalanga) wrote:
[...]
> I go through the list of output made by the build process and discover
> that the build process didn't do what seems to be the "standard"
> options, "-O2 -g".   Neither of these were used.  So, I thought I'd
> add them.  I did this:
> 
> make clean
> ./configure CFLAGS="-ggdb -O0" CXXFLAGS="-ggdb -O2"
> make
> 
> Much to my surprise, the "-ggdb -O0" didn't appear then either.  Did I
> miss something?  Isn't this how they are set?

Normally this is fine.  If it is not working, then probably your
configure and/or Makefile are ignoring the user-set values.  For
example, code like this...

> CPPFLAGS=" -D__linux__ -DADT_TRACE_ENABLE=0"
> 
> if [[ $debug = "true" ]]; then
> CFLAGS=" -O0 -ggdb"
> CXXFLAGS=" -std=c++0x -O0 -ggdb"
> else
> CFLAGS=" -O2 -g"
> CXXFLAGS="-std=c++0x -O2 -g"
> fi

...will override any user assignment to CPPFLAGS, CFLAGS and CXXFLAGS.
It is also basically guaranteed to fail on any compiler that is not
compatible with GCC.
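Two patterns avoid clobbering the user's settings. A plain-shell sketch (the variable values below are stand-ins for what ./configure would actually see):

```shell
#!/bin/sh
# Safer patterns for configure.ac than unconditional CFLAGS="..." assignment.

# 1. Provide a default only when the user left CFLAGS unset:
CFLAGS="-ggdb -O0"           # stand-in for: ./configure CFLAGS="-ggdb -O0"
: "${CFLAGS=-O2 -g}"         # no effect here, because CFLAGS is already set
echo "default pattern: $CFLAGS"

# 2. Append project-required flags instead of overwriting the user's:
CPPFLAGS="$CPPFLAGS -DADT_TRACE_ENABLE=0"
echo "append pattern: $CPPFLAGS"
```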

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: Rarely rebuilt files

2014-11-12 Thread Nick Bowler
On 2014-11-12 21:58 +0200, fr33domlover wrote:
> On 2014-11-12
> Nick Bowler  wrote:
> > What is probably happening is that in VPATH builds from your tarball,
> > your documentation is being rebuilt even though it was distributed.
> > This is leaving files behind in your build directory, which distcheck
> > is then complaining about.
> > 
> > Hope that helps,
> 
> Maybe this is a problem too, but there's something before that - see the "make
> distclean" part above. I put the HTML files to be removed by "make
> maintainer-clean", which means that "make distclean" is *not supposed to 
> remove
> them* - anyway, this is what I intend. So even without the error you suggest,
> it should complain about files left in the builddir.
> 
> Am I right?

I think there may be some confusion about what distclean is supposed to
do.  It is supposed to delete all generated files that were not part of
the original distribution.  MAINTAINERCLEANFILES is a red herring.

In a VPATH build, there is a strict separation between the distributed
files (in srcdir), and the build outputs (in builddir).  By definition,
any file that shows up in builddir in this case was not part of the
distribution (as it was created after unpacking the tarball), so
distclean must delete it.

Distcheck is complaining about this apparent discrepancy.  The usual
cause of these errors is when distributed files get erroneously rebuilt.

Since distributed files being rebuilt essentially defeats the whole
point of distributing them in the first place, this suggests a bug in
the build process.  Perhaps the distribution timestamps are not correct.

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: ${OBJEXT} in implicit rule

2014-11-12 Thread Nick Bowler
On 2014-11-12 16:58 +0100, Jan Engelhardt wrote:
> Using automake-1.13.4, when using the following Makefile.am fragment,
> 
> ---8<---
> bin_PROGRAMS = foo
> foo_SOURCES = foo.c bar.k
> .k.${OBJEXT}:
> gcc -x c -c $< -o $@
> --->8---
> 
> I observe that bar.o is not built and not linked into foo.

Indeed, the use of custom file extensions in _SOURCES seems to be
completely broken.  Literally the only case that appears to work
correctly is when you use a suffix rule and its definition is
precisely of the form:

  .k.$(OBJEXT):
...

(where .k can be any custom suffix).

I didn't even know this was a feature at all, but sure enough it's
documented[1].  The astute may note that the examples in the manual have
suffix rules which look a bit different from the one above...

I suggest ignoring this functionality entirely, because the sane way to
add custom compiler rules is to use _LDADD or _LIBADD.  For example:

  bin_PROGRAMS = foo
  foo_SOURCES = foo.c
  foo_LDADD = bar.${OBJEXT}

  .k.${OBJEXT}:
gcc -x c -c $< -o $@

Everything in _LDADD is simply appended verbatim to the linker command
line.  Automake adds things that look like filenames automatically to
the dependencies of the binary, and everything will work correctly in
most cases.

You can also set foo_DEPENDENCIES and/or EXTRA_foo_DEPENDENCIES
manually in more complicated cases.

[1] §18.2 "Handling new file extensions"
https://gnu.org/software/automake/manual/automake.html#Suffixes

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: Rarely rebuilt files

2014-11-12 Thread Nick Bowler
Hello,

On 2014-11-11 22:07 +0200, fr33domlover wrote:
> On 2014-11-11
> Simon Richter  wrote:
> > On 11.11.2014 18:50, fr33domlover wrote:
> > 
> > > When I ran `make distcheck`, it failed because the HTML files don't get
> > > cleaned by `make distclean`. That makes sense, but specifically for my
> > > package this is not an error.
[...]
> Hmmm I think I didn't make myself clear. So just to make sure, before I cause
> confusion: The files are indeed distributed validly. The "problem" is that
> they are not cleaned by `make distclean`. I intentionally made them clean only
> on `make maintainer-clean`. When `make distcheck` sees they don't get cleaned
> by `make distclean`, it produces an error.

I think your distribution tarballs are not working properly in VPATH
builds.  Distcheck has found the problem but it is maybe not reporting
it very well.  Here's a simplified view of how distcheck tests "make
distclean":

  - It unpacks the distribution tarball to be tested, and marks all
unpacked files read-only.

  - It runs configure in a separate, empty directory, i.e., srcdir
!= builddir.

  - After doing a bunch of other tests, it runs "make distclean".

  - Then, if the build directory is not empty, report an error.

What is probably happening is that in VPATH builds from your tarball,
your documentation is being rebuilt even though it was distributed.
This is leaving files behind in your build directory, which distcheck
is then complaining about.

Hope that helps,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: Portable Use of Variables

2014-10-27 Thread Nick Bowler
On 2014-10-26 22:15 +0200, fr33domlover wrote:
> I'm a bit confused about all the expressive features and ways to use makefile
> variables, so just to be sure -
> 
> http://www.gnu.org/software/make/manual/html_node/Substitution-Refs.html
> 
> Are these uses of variables portable, or should a portable Makefile.am use 
> only
> the plain $(var) form without the tricks?

The first form of expansion on that page, $(var:.a=.b), should be OK.
They are standard in POSIX and work on all make implementations that I
know of.

The version with % characters is not portable.
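For example (the variable names here are illustrative):

```make
SRCS = foo.c bar.c
OBJS = $(SRCS:.c=.o)      # portable (POSIX): expands to "foo.o bar.o"

# Not portable: GNU-style patterns using %.
# OBJS = $(SRCS:%.c=%.o)
```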

Hope that helps,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: nodist_noinst_SCRIPTS and `make distcheck`

2014-10-20 Thread Nick Bowler
On 2014-10-20 20:25 +0300, fr33domlover wrote:
> > There are two related things that distcheck is testing here, and either
> > one of them may be tripping you up.
> > 
> > First, distcheck is checking that users can run "make dist" from your
> > tarball.
> 
> Indeed `make dist` succeeds.
> 
> > Second, distcheck is checking that all this works properly in VPATH
> > builds (i.e., with srcdir != builddir).
> 
> I didn't try, but I assume it will work because the only problem is that
> script, which IS present in the right place. The Makefile.am makes sure it 
> will
> be taken from $srcdir. The problem happens because `make distcheck` copies
> files from the source repo into a new temporary srcdir.

Not exactly.  Distcheck first creates a distribution tarball (i.e.,
make dist), THEN it unpacks the tarball into a temporary srcdir and
tests that.  In other words, distcheck is directly testing the 'user
experience' when they unpack a tarball you publish.

Part of that user experience is that the following sequence should work:

  - download your package tarball from a website.
  - unpack it
  - ./configure && make dist

Since your script is not distributed, that sequence must not require the
script to work.

[...]
> It's going to be the first release, so I didn't try distcheck until now.
> 
> The line which generates the ChangeLog in the snippet above requires that 
> script
> to be present in $srcdir - but distcheck doesn't copy it to its temporary
> srcdir, so it's not present.

You should not need to copy the script at all, as it should be run at
'make dist' time or earlier.  This will happen directly in your VCS-
controlled srcdir, before distcheck unpacks the tarball to test.

> The solution to the problem of .git not being present during distcheck may be:
> In distcheck-hook, take it from the right place (i.e. $(srcdir)/../.git) and
> then try to make the ChangeLog again. The thing about the script is, that 
> while
> I can do the same (pick the script from the original $srcdir), it would be
> somewhat wrong design-wise - if a script is used for `make dist`, then `make
> distcheck` should copy it into the temporary srcdir just like the source code.
> While not installed nor distributed, the script is *used* during the process
> and is therefore required.

I'm not sure I understand.

When "make distcheck" tests the distribution, it tries to run "make
dist" from the tarball.  This must succeed *without* running your
script at all, as the git history will not be available in this
scenario.  So when building from the tarball, your distribution should
copy the already-included ChangeLog instead of generating a new one.

As I mentioned, one solution is to use a dist-hook to generate the
ChangeLog only when certain conditions (i.e., building from source
control) are met.

There are other possible solutions: for example, Automake generates
ChangeLog in builddir using a phony target.  Personally, I think a
dist-hook is simpler.

> How do I tell `make distcheck` to do that? I did try to have the
> ChangeLog target depend on the script file, but then instead of being
> satisfied by it (since the script file exists), it complains there's
> no target for it (but none needed since it's a file and it already
> exists).

You can't have ChangeLog depend on your script because the script is
not distributed, so this dependency can never be satisfied in the
tarball.

> Is there a solution for this in automake?

Yes, use a dist-hook.

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: nodist_noinst_SCRIPTS and `make distcheck`

2014-10-20 Thread Nick Bowler
On 2014-10-20 17:51 +0300, fr33domlover wrote:
> I have a script in my project, which creates the ChangeLog from the git log
> (it's the script from gnulib). Since the script is meant only for `make dist`,
> it's neither distributed nor installed. I didn't put it in any variable, not
> even nodist_noinst_SCRIPTS. Everything seemed to work, including `make dist`.

> Now I tried `make distcheck` for the first time. It fails because it cannot
> find the script. When it copies the srcdir content into a new temporary place,
> it simply "forgets" to take the script, maybe because there are no targets for
> it at all.
[snip details].

There are two related things that distcheck is testing here, and either
one of them may be tripping you up.

First, distcheck is checking that users can run "make dist" from your
tarball.

Second, distcheck is checking that all this works properly in VPATH
builds (i.e., with srcdir != builddir).

For the first point, in principle it is OK to have ChangeLog generated
automatically from your VCS.  But you need to be careful that the
distributed ChangeLog's prerequisites are present and that it is
up-to-date, so that "make dist" from the tarball does not attempt to
re-generate it (since obviously this process will not function).  One
way to solve this is to sidestep it entirely with a dist-hook, which can
test if you are building from VCS, then generate the ChangeLog as
appropriate.  Something like this (untested):

  dist-hook: dist-create-changelog
  dist-create-changelog:
if test -d "$(srcdir)/.git"; then \
  : generate "$(distdir)/ChangeLog" here; \
fi
  .PHONY: dist-create-changelog

Here we rely on the fact that Automake will automagically include
ChangeLog in the tarball if it is present in srcdir.  You may want to
also add a distcheck-hook to ensure that this actually happens.

For the second point, if you are not routinely testing VPATH builds
from your git tree at all, just be aware that distcheck may uncover bugs
related to this feature.

Hope that helps,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: Recompiling all sources when the makefile changes

2014-10-09 Thread Nick Bowler
On 2014-10-09 20:07 +0100, R. Diez wrote:
> > If "configure" is changing something, an easy and reliable option is
> > to ensure that it changes config.h (or some other configuration
> > header), which will naturally cause a rebuild of files that include
> > the header.
> 
> This is not as straightforward as it sounds. My project currently does
> not even have a config.h file, and even if it had one, there is no
> reliable way to make sure that all sources end up including that file.

Changing an existing project to use AC_CONFIG_HEADERS when it does not
currently do so is a pretty big change, yeah, because you must include
config.h in every source file.

It's something you might want to look into anyway, as it can avoid
problems with DEFS exceeding command-line length limits.

> It may also be a different compiler flag that has no effect on the
> config.h file whatsoever. It is safer if all the object files depend
> on the makefile.

While it's your project and you can do this if you want, making
all objects depend on the makefile sounds like a really silly idea.
Nobody wants to spend 30 minutes recompiling because they added one
source file to a library, or because they added an additional test
case.

I suggest choosing some specific features to test, and make things depend
on THAT.  For example, you could rebuild if CFLAGS are changed by storing
the previous setting of CFLAGS in a file, update that file if the current
setting differs, then add prerequisites on THAT file.
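A sketch of that technique (untested; the stamp file name and the FORCE target are illustrative, and mumble_OBJECTS is the undocumented Automake variable discussed below):

```make
# Re-record CFLAGS whenever it changes; FORCE makes the comparison run
# on every make invocation, but the stamp is only rewritten (and thus
# objects only rebuilt) when the value actually differs.
cflags.stamp: FORCE
	@echo '$(CFLAGS)' | cmp -s - $@ || echo '$(CFLAGS)' > $@
FORCE:

$(mumble_OBJECTS): cflags.stamp
```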

> > It's not documented in the manual, but there are automake variables
> > that can help you add prerequisites to your object files.  They have
> > the form mumble_OBJECTS and contain the list of object files for a
> > particular program or library, corresponding to mumble_SOURCES.
> 
> Thanks, that looks promising, I'll give it a go.
> 
> The fact that is not documented makes me worry that it may change in
> the future without further notice.

Sure, always something to consider before using undocumented functionality.
For what it's worth, I use mumble_OBJECTS to add extra prerequisites to
objects.  It's too convenient to pass up.

> > The object filenames are reliable as long as you don't change any
> > settings (like subdir-objects, but also other things).  So it is
> > safer to use mumble_OBJECTS in my opinion.
> 
> I am not sure what you mean here. Do you mean that, if I turn on
> subdir-objects or some other option, then mumble_OBJECTS may not work
> any more?

No, mumble_OBJECTS will work correctly regardless of the actual object
names, which is why I suggested it is safer than specifying object
filenames directly.

If you wrote some object filename explicitly in your Makefile.am, it may
be out of date if you, for example, added per-target CFLAGS or enabled
subdir-objects.

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: Recompiling all sources when the makefile changes

2014-10-09 Thread Nick Bowler
On 2014-10-09 14:53 +0100, R. Diez wrote:
[...]
> I noticed that, if I change the version number in the top-level
> configure.ac by amending the call to AC_INIT(), running "make" in the
> build directory automatically regenerates the 'configure' script and
> re-runs it. However, the C++ source files are not rebuilt, because
> everything is up to date. So it looks like the Autotools get the job
> only partially done.

If "configure" is changing something, an easy and reliable option is to
ensure that it changes config.h (or some other configuration header),
which will naturally cause a rebuild of files that include the header.

[...]
> I could try to make all object files depend on the makefile, which may
> not be completely accurate if the makefile includes other files, but
> it would probably be enough. However, I did not find a way to add an
> extra dependency to all .o files.

It's not documented in the manual, but there are automake variables
that can help you add prerequisites to your object files.  They have
the form mumble_OBJECTS and contain the list of object files for a
particular program or library, corresponding to mumble_SOURCES.

So you can write something like:

  bin_PROGRAMS = mumble
  $(mumble_OBJECTS): extra-prerequisites

I don't think there is any automatically-defined variable that contains
ALL objects.

> I could try to manually guess the .o filenames from the .cpp
> filenames, but those .o filenames may change in the future,
> especially in the face of subdir-objects.

The object filenames are reliable as long as you don't change any
settings (like subdir-objects, but also other things).  So it is
safer to use mumble_OBJECTS in my opinion.

Hope that helps,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: Keeping Makefile.am in sync with Git repository

2014-09-19 Thread Nick Bowler
On 2014-09-19 11:23 -0700, Jack Bates wrote:
[...]
> Now if a file is removed from the repository but not from a Makefile.am 
> the script rewrites the Makefile.am, dropping the file from it as well.
> If a file is added to the repository the script also adds it to the 
> appropriate Makefile.am's EXTRA_DIST variable. This isn't always the 
> correct edit: Sometimes it should get added to a different primary or 
> sometimes the file isn't to be distributed. The script takes a 
> configurable list of files which aren't distributed. On one hand we now 
> have another list of files to keep up-to-date, just shifting the 
> problem, but we're more commonly adding/removing files that are 
> distributed, so the list of files which aren't changes less frequently 
> than the Makefile.am-s. Also the script hopefully gets the developer's 
> attention whereupon she can intervene with a better edit, whereas 
> without it, forgetting to distribute a new file can sometimes go 
> unnoticed for a long time.

In most cases[1], a file that is outright missing from the distribution
can be immediately caught by running "make distcheck".  Additional test
cases can be added using distcheck-hook.  Make distcheck part of your
test plan.

That being said, if the script is helpful for you, by all means use it!

[1] Notable exception: files which need to be distributed but don't get
installed or used by the build process; files like READMEs, change
    logs, etc.  Distcheck will normally NOT notice if such files are
missing.
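For those inert files, a distcheck-hook can assert their presence explicitly. A sketch (untested; the file list is hypothetical):

```make
distcheck-hook:
	@for f in README NEWS docs/manual.html; do \
	  test -f "$(distdir)/$$f" \
	    || { echo "missing from distribution: $$f" >&2; exit 1; }; \
	done
```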

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)



Re: Setting ACLOCAL_AMFLAGS with ':=' vs '='

2014-09-18 Thread Nick Bowler
On 2014-09-18 09:36 +0100, R. Diez wrote:
> If I add this line to my Makefile.am (and I make sure that the 'm4'
> subdir is created beforehand), then it works as intended:
> 
>   ACLOCAL_AMFLAGS = -I m4
> 
> However, if I use this syntax:
> 
>   ACLOCAL_AMFLAGS := -I m4
> 
> Then I get the following warning:
> 
>   libtoolize: Consider adding `-I m4' to ACLOCAL_AMFLAGS in Makefile.am.

This may be a slight bug in libtoolize, which is not part of Automake.
I have added the libtool list to Cc.  If libtoolize is still properly
copying its macros into your m4 directory then I would ignore the
warning.

But two things to consider:

 (1) Automake is designed to produce makefiles which are portable
 in practice (i.e., run on a variety of make implementations).
 Use of := assignments fails on heirloom make, for example,
 and probably other implementations.

 (2) There is no functional difference between "=" and ":=" if the
 right hand side does not contain any variable references (as in
 your example).
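For the curious, the two flavors only diverge once the right-hand side
references another variable (GNU make syntax shown; still not portable):

```make
base = one
lazy = $(base) two     # "=" re-expands at each use
now := $(base) two     # ":=" snapshots the value immediately
base = ONE
# When used later: $(lazy) is "ONE two", $(now) is "one two".
```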

As an alternative, with recent versions of Automake you can try using
AC_CONFIG_MACRO_DIRS in configure.ac rather than setting m4 directories
in Makefile.am.  I'm not sure if all the tooling has been updated to
fully handle this new feature yet, though.

[snip description of GNU make semantics]
> That flavor is now a POSIX standard (with syntax "::="), so it should
> be portable too (at least in the future).

Well no, it doesn't follow that := assignments will become portable in
the future just because POSIX standardized ::= syntax.

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


