Re: portability of xargs
On Tue, 15 Feb 2022, Jan Engelhardt wrote:
On Tuesday 2022-02-15 07:16, Daniel Herring wrote:
Maybe a next-generation configuration tool should start by defining interfaces for user interactions and build tools. This would allow CLI and easy GUI and IDE users, integration with multiple build systems, static and dynamic probing, ...

Obligatory https://xkcd.com/927/ response. :)

Not necessarily define entirely NEW interfaces; just recognize where newer tech like JSON (?) could be applied to improve the functionality of older tech like Autotools. Something like the Language Server Protocol, but for build configuration.

In fact, this has the potential to greatly reduce the dependencies for a cross-compile target. You could treat it as a very dumb terminal and push all the logic to the host, a la Expect / Ansible. You wouldn't even need a POSIX shell...

For all its stupid connectors and cables, I much prefer USB to what it displaced many years ago. Similar story with Git. Running systemd, but not fully trusting it yet.

-- Daniel
Re: portability of xargs
Hi Mike,

It often makes sense to change a project that uses Autotools rather than modifying the Autotools. Can this argument-splitting behavior be documented as expected? Are no workarounds available? Replacing "overly fancy but proven shell script" with "dependency on a new tool" may replace an obscure problem for one project with an intractable problem that breaks portability (to obscure platforms) for all projects.

I have distant memories of working through similar problems by tracing configure scripts and makefiles, redirecting the problematic command lines to files, and monkey-patching them before execution. Tedious but doable, due to the simplicity of the tooling, even in a cross-compile. Getting a "functional" xargs would have been MUCH harder in that environment.

IMO, you are touching on the fundamental issue of Autotools. Portable shell scripting is one of the worst programming environments available today, and it hinders progress and adoption, but it is the most widely available bootstrapping environment. Autotools embodies decades of hard-won experience, and people rightly want to use it on new projects, but they are weighed down by all the baggage of history, even when it is not even remotely relevant. As the years go by, this tradeoff is getting less justifiable.

I hope a better bootstrapping environment will come along -- something that ports the spirit of Autotools to a much better language. Maybe a subset of Python that can bootstrap from shell? Many would argue for CMake or one of the newer build systems, but they all seem to miss one or more fundamentals of the Autotools. At a minimum, they should address the concerns about imake in the Autoconf FAQ. ;)

https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.71/html_node/Why-Not-Imake.html

Maybe a next-generation configuration tool should start by defining interfaces for user interactions and build tools.
This would allow CLI and easy GUI and IDE users, integration with multiple build systems, static and dynamic probing, ...

-- Daniel

On Mon, 14 Feb 2022, Mike Frysinger wrote:
context: https://bugs.gnu.org/53340

how portable is xargs ? like, beyond POSIX, as autoconf & automake both support non-POSIX compliant systems. i want to use it in its simplest form: `echo $var | xargs rm -f`.

automake jumps through some hoops to try and limit the length of generated command lines, like deleting output objects in a non-recursive build. it's not perfect -- it breaks arguments up into 40 at a time (akin to xargs -n40) and assumes that it won't have 40 paths with long enough names to exceed the command line length. it also has some logic where it's deleting paths by globs, but the process to partition the file list into groups of 40 happens before the glob is expanded, so there are cases where it's 40 globs that can expand into many many more files and then exceed the command line length.

if i can assume `xargs` is available, then i think i can replace all of this custom logic with a simple call to that.
-mike
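Mike's proposed replacement is easy to sketch. Assuming a POSIX xargs, it re-splits its input into as many command lines as needed to stay under the system's argument-length limit, so the 40-at-a-time batching goes away. The file names below are illustrative:

```shell
# Create a few stand-in object files to delete (names are made up).
tmpdir=$(mktemp -d)
for i in 1 2 3; do : > "$tmpdir/obj$i.o"; done
to_delete="$tmpdir/obj1.o $tmpdir/obj2.o $tmpdir/obj3.o"

# xargs reads the whitespace-separated list and invokes rm as many
# times as needed to keep each command line under the length limit.
echo $to_delete | xargs rm -f
```

Note that this simple form still breaks on file names containing spaces or quotes, which is part of the portability question being asked.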
Re: Automake for RISC-V
Hi Billa,

It sounds like you are trying to build from a source repo instead of a release tarball.

Building with automake or autoconf is a multi-step process: automake processes the ".am" files to generate makefile templates, configure turns those into makefiles, and make builds the executables.

Automake has a special "make dist" rule. The dist rule uses automake on a host computer to produce a tarball. Building from this dist tarball requires make and a compiler, but it does not require automake.

-- Daniel

On Sun, 21 Nov 2021, Billa Surendra wrote:
On Sun, 21 Nov 2021, 2:28 am, Nick Bowler wrote:
On 20/11/2021, Billa Surendra wrote:
I have a RISC-V native compiler on the target image, but when I am compiling automake on the target image it needs automake on the target. This is the main problem.

Automake should not be required to install automake if you are using a released version and have not modified the build system.

Could you please explain more. What is the released version? What does "modified build system" mean? I have created my RISC-V Linux target system on x86; now I am compiling automake on the target system only with a native compiler. In the same way, when I try to install texinfo on the target image, it also needs automake on the target.

Likewise, texinfo should also not require automake to install.

Cheers, Nick
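The two paths might look like this (package name and version are illustrative):

```shell
# On a development host that has the autotools installed:
autoreconf --install      # generate configure and Makefile.in from .ac/.am
./configure
make dist                 # produce e.g. foo-1.0.tar.gz

# On the RISC-V target, using only a shell, make, and a compiler:
tar xzf foo-1.0.tar.gz
cd foo-1.0
./configure
make && make install
```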
Re: Automake testsuite misuses DejaGnu
Hi Jacob,

It seems fragile for DejaGnu to probe for a testsuite directory and change its behavior as you describe. For example, I could have a project without the testsuite dir, invoke the tester, and have it find and run some unrelated files in the parent directory. Unexpected behavior (chaos) may ensue.

Is there an explicit command-line argument that could be added to the Automake invocation?

-- Daniel

On Mon, 12 Jul 2021, Jacob Bachmeyer wrote:
Karl Berry wrote:
DejaGnu has always required a DejaGnu testsuite to be rooted at a "testsuite" directory

If something was stated in the documentation, but not enforced by the code, it is hardly surprising that "non-conformance" is widespread.

It is not widespread -- all of the toolchain packages correctly place their testsuites in testsuite/ directories. As far as I know, the Automake tests are the only outlier.

Anyway, it seems like an unfriendly requirement for users. And even more so to incompatibly enforce something now that has not been enforced for previous decades. Why? (Just wondering.) -k

Previous versions of DejaGnu did not properly handle non-recursive make with Automake-produced makefiles. Beginning with 1.6.3, the testsuite is allowed to be in ${srcdir}/testsuite instead of ${srcdir} exactly. Enforcing the long-documented (and mostly followed) requirement that there be a directory named "testsuite" containing the testsuite allows DejaGnu to resolve the ambiguity and determine whether it has been invoked at package top-level or in the testsuite/ directory directly.

Even in 1.6.3, there was intent to continue to allow the broken cases to work with a warning, but I made the conditional for that case too narrow (oops!) and some of the Automake test cases fail as a result. Fixing this now is appropriate because no one is going to see the future deprecation warnings, due to the way the Automake tests are run.

-- Jacob
Re: Relative path without realpath
I would use or copy the definition in GNU Autoconf. Portable and battle-tested. You can dig through their macros or look in a configure script to see how they generate abs_builddir, abs_srcdir, abs_top_builddir, ...

- Daniel

On Fri, 9 Oct 2020, Cristian Morales Vega wrote:
I would like to send some patches making pkg-config files relocatable. pkg-config offers the "pcfiledir" variable to do this. The thing is that I need to be able to obtain prefix, libdir, includedir, etc. relative to the directory where the .pc file gets installed.

I can easily do this with realpath, from coreutils, but it's not POSIX and I'm not sure people will accept such a dependency. The use of automake per se has no dependency on coreutils, does it? Is there any way to obtain the path of one directory relative to another in automake without adding a new dependency? "realpath" implemented as an m4 macro, maybe?

Thanks.
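For what it's worth, a relative-path computation can also be done in portable shell with parameter expansions alone. This is only a sketch; the function name is mine, and it assumes both arguments are absolute, normalized paths:

```shell
# relpath FROM TO: print TO expressed relative to FROM.
# Assumes absolute paths with no trailing slashes and no ".." components.
relpath() {
  from=$1 to=$2 up=''
  # Climb out of FROM until it is a prefix of TO.
  while [ "${to#"$from"/}" = "$to" ] && [ "$from" != / ]; do
    from=${from%/*}
    [ -n "$from" ] || from=/
    up="../$up"
  done
  if [ "$from" = / ]; then
    printf '%s\n' "$up${to#/}"
  else
    printf '%s\n' "$up${to#"$from"/}"
  fi
}

relpath /usr/lib/pkgconfig /usr/include   # prints ../../include
```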
Re: Proposal Accepted for GSoC
+1 for Diab's book and parser recommendations. Here are two more links to them.

http://modernperlbooks.com/books/modern_perl_2014/07-object-oriented-perl.html
https://jeffreykegler.github.io/Marpa-web-site/

If you like historical perspective, then here's an interesting read.

https://jeffreykegler.github.io/personal/timeline_v3

If you have trouble with grammars, then ANTLR has some good tools but unfortunately no Perl support.

http://www.antlr.org/tools.html

- Daniel
TAP colors
Hi,

Regarding the topic of colored TAP output, it seems that there are good reasons for having both plain-text and colored-text output. There are also reasons for people to customize the colors. Here are two ideas that might help. For all I know, the code may already be structured this way, and we just have a small documentation issue to improve awareness of the features.

1) Would it be hard to support both, perhaps with separate targets like tap and tap-color? Then people can easily select plain or colored text according to preference or present need.

2) Would it be possible to split out the color escape sequences into Autoconf site configuration variables? Named according to semantics, not the actual color -- something like TAP_COLOR_PRE_WARN and TAP_COLOR_POST_WARN. Then you could provide configurations for dark ANSI backgrounds and light ANSI backgrounds, and people with non-ANSI terminals could do whatever their terminal requires.

Implementing #2 might also simplify #1. Plain text has empty escape sequences...

Later, Daniel
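Idea #2 could look something like the following sketch. The variable names are the hypothetical ones above, and the ANSI defaults are just one possible site configuration:

```shell
# Semantic color hooks with ANSI defaults; a site can override them,
# and setting them empty yields plain text.
: "${TAP_COLOR_PRE_WARN=$(printf '\033[0;33m')}"
: "${TAP_COLOR_POST_WARN=$(printf '\033[0m')}"

warn() { printf '%s%s%s\n' "$TAP_COLOR_PRE_WARN" "$1" "$TAP_COLOR_POST_WARN"; }

# Plain text is just the empty-escape-sequence case:
TAP_COLOR_PRE_WARN='' TAP_COLOR_POST_WARN=''
warn 'not ok 1 - something failed'   # prints: not ok 1 - something failed
```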
Re: More control over 'make dist'
Hi Michal,

I've always handled this by distributing the single real file, along with a script that creates the missing symlinks later, either during configure or as part of one of the make targets. It's a bit quirky, but not too annoying. Since the dist target is for making source packages, there are plenty of natural places to hide this. Many related pieces of infrastructure, including Debian packages, have the same basic symlink-packing issue and workaround.

- Daniel

On Wed, 14 Sep 2016, Michal Privoznik wrote:
Dear list,

I'm a libvirt developer and I've run into an interesting problem. I'd like to hear your opinions on it. Libvirt is a virtualization library that uses XML to store a virtual machine config. We have a couple of tests in our repository too that check whether XML configs are valid, and whether some operations change them in the expected way. Some operations don't change the XML at all, in which case we just symlink the output file to point to the input file:

  ln -s $xml.in $xml.out

However, I was looking into an archive produced by 'make dist' the other day and found out that the symlinks are not preserved. I've traced down the problem and found that autoconf is just hardcoding some of tar's options, namely -chf. Yes, it is -h that causes a symlink to be dereferenced. So my question is: what do you think of making -h configurable? We could add a new tar-* option to AM_INIT_AUTOMAKE, say tar-symlinks, which would suppress -h on tar's command line.

What are your thoughts?

Michal
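As a sketch of the workaround: distribute the real file plus a list of links, and recreate them from a configure- or make-time hook. The manifest name and its "link target" line format here are my invention, not an Automake feature:

```shell
# Set up a stand-in distributed tree: one real file and a manifest
# listing "link target" pairs, one per line.
workdir=$(mktemp -d)
cd "$workdir"
echo '<domain/>' > machine.xml.in
printf '%s %s\n' machine.xml.out machine.xml.in > symlinks.list

# The hook: recreate any missing symlinks from the manifest.
while read -r link target; do
  [ -e "$link" ] || ln -s "$target" "$link"
done < symlinks.list
```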
Re: bug#13578: [IMPORTANT] A new versioning scheme for automake releases, and a new branching scheme for the Git repository
I like the -alpha/-beta/-rcN markings. As someone who often tracks cutting-edge stuff, it is nice to have a clear indicator of how stable the author thinks something is. This info helps the user do a quick cost/benefit estimate when deciding what version to use today.

- Daniel
Re: [IMPORTANT] A new versioning scheme for automake releases, and a new branching scheme for the Git repository
On Mon, 28 Jan 2013, Stefano Lattarini wrote:
Feedback, opinions, objections?

There was a lot to read, and I confess to not giving it full justice. Others have already extolled the virtues of backwards compatibility.

Regarding some "why" questions, here's the manual entry on how versioning is used today.

http://www.gnu.org/software/automake/manual/html_node/API-Versioning.html

As far as I can tell, the current proposal boils down to "bump the major version more often". That can work, but I'm not quite sold on it. I like when a major bump means "wake up and reread the docs before upgrading" and minor bumps mean "upgrade casually". (And aggregate changes to minimize major bumps and make them more worthwhile.)

Here are a couple of "standards" for versioning.

http://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
http://semver.org/

The general principle is that A.B.C decodes to something easy like

  A = new API, breaking changes requiring human intervention
  B = same API, maybe with added options, mostly just recompile
  C = just bugfixes, documentation tweaks, packaging, etc.

For example, a deprecation warning bumps B, but actually removing the deprecated functionality bumps A.

As for branching, if every release is tagged, and you want to apply a bugfix to release A.B.C, why not just create a maint-A.B branch or the like? Delay creating branches until they are needed. I wasn't seeing the need for the complex branching details.

I agree it's nearing time for a "2.0" release; there has been talk of removing several now-deprecated functions and making other major changes (e.g. parallel testing). Would it be possible to start collecting these into a preview/beta release and leave the "1.0" series with its current API and behaviors? Just a thought.

I've done the build system for several projects I'm no longer associated with. It would be nice if the people who inherited them don't have to rework them for a few more years.
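That reading of A.B.C can be made concrete with a small sketch; the function and its messages are illustrative only:

```shell
# bump_kind OLD NEW: classify an upgrade by which A.B.C component changed.
bump_kind() {
  oldA=${1%%.*}; rest=${1#*.}; oldB=${rest%%.*}
  newA=${2%%.*}; rest=${2#*.}; newB=${rest%%.*}
  if [ "$newA" != "$oldA" ]; then
    echo 'new API: reread the docs before upgrading'
  elif [ "$newB" != "$oldB" ]; then
    echo 'same API: upgrade casually'
  else
    echo 'bugfixes only'
  fi
}

bump_kind 1.11.3 2.0.0   # prints: new API: reread the docs before upgrading
bump_kind 1.11.3 1.11.4  # prints: bugfixes only
```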
A major version change (again, small numbers) might motivate distributions to keep both around for a while. Hope that helps, Daniel
Re: Improvements to "dist" targets (was: Re: EXTRA_DIST, directories, tar --exclude-vcs)
On Tue, 1 Jan 2013, Stefano Lattarini wrote:
OTOH, what about distribution "tarballs" in '.zip' format? They don't use tar at all ... Time to deprecate them maybe? Is anybody actually using them? And while at it, what about the even more obscure 'shar' format?

I haven't manipulated a shar file in years, but zip is still the dominant archive format on MS platforms. It is quite common (and a good practice) for a project to distribute \n newlines in a tarball and \r\n newlines in a zip archive.

- Daniel
Re: IF condition inside Makefile.am
On Mon, 29 Oct 2012, The_Jav wrote:
I would like to create an if condition, but it only works partially:

  libmy_lib_SOURCES = source.c
  if HAVE_IPP
  libmy_lib_SOURCES += ipp.c
  endif

When HAVE_IPP is false, the makefile created is different from a makefile created with only "libmy_lib_SOURCES = source.c". Why?

There are two reasons I can think of. First, there is a fundamental difference: with the IF, Automake knows it needs to distribute ipp.c. Second, there is an accidental difference: IF is implemented by configure substitutions, and this works by selectively commenting out some lines in the Makefile.

In both cases (without IF or without ipp.c), normal compilation should be the same. Are you seeing any other behavior changes as well?

- Daniel
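The "commenting out" mechanism can be sketched roughly like this. Automake writes both branches into Makefile.in behind @HAVE_IPP_TRUE@/@HAVE_IPP_FALSE@ markers, and config.status substitutes each marker with either an empty string or '#'. The fragment below is illustrative, not the exact text automake emits:

```shell
workdir=$(mktemp -d)
# Illustrative Makefile.in fragment with both branches of the conditional.
cat > "$workdir/fragment.in" <<'EOF'
@HAVE_IPP_TRUE@am__append_1 = ipp.c
@HAVE_IPP_FALSE@am__append_1 =
EOF

# With HAVE_IPP false, the TRUE marker becomes '#' (a comment) and the
# FALSE marker disappears:
sed -e 's/@HAVE_IPP_TRUE@/#/' -e 's/@HAVE_IPP_FALSE@//' "$workdir/fragment.in"
```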
Re: What is it with the dependency on config.status
On Sun, 1 Jan 2012, Stefano Lattarini wrote:
On 12/31/2011 12:29 AM, Jan Engelhardt wrote:
I have seen user-induced lines in Makefile.am like these in a handful of packages:

  ...
  ${pkgconfig_DATA}: ${top_builddir}/config.status

Given that automake 1.11.1/autoconf 2.68 seem to automatically recreate files specified in AC_CONFIG_FILES when configure.ac is changed, what is the actual purpose/significance of this line?

The best way to know would probably be asking the person(s) that have written such a line :-) My initial "knee-jerk" guess was that that line was needed for older automake versions, but by looking into the automake history, I've seen that the automatic remake rules for files depending on config.status have been around in one form or another since February 1996 (i.e., pre 1.0). So I don't honestly know what the rationale for that line is.

I've occasionally been guilty of worse abuses of the autotools... Take a look elsewhere in the makefile. Does the author have the makefile generating one of the files in pkgconfig_DATA? If all the files in this variable are listed in AC_CONFIG_FILES, then this rule probably is unnecessary. If some of the files are generated by a makefile, then this rule is important.

- Daniel
Re: Using variables in the definition of SUBDIRS
On Mon, 15 Aug 2011, Misha Aizatulin wrote:
I would like to use variables in the definition of SUBDIRS, something of the kind:

  DEPS = $(TOP)/deps
  CIL = $(DEPS)/cil-1.3.7
  SUBDIRS = $(CIL) ...

Would one of these help?

http://www.gnu.org/s/libtool/manual/automake/Subdirectories-with-AM_005fCONDITIONAL.html
http://www.gnu.org/s/libtool/manual/automake/Subdirectories-with-AC_005fSUBST.html
http://www.gnu.org/s/hello/manual/automake/Usage-of-Conditionals.html

- Daniel
Re: How to use alternative autoconf than what is located in /usr/local/bin (for automake compile)?
On Mon, 23 May 2011, Daniel Herring wrote:
FYI, when I install a new version of the autotools, I do something like:

  # PATH=/new/path/bin:$PATH
  # cd m4; ./configure --prefix=/new/path; make; make install
  # cd ../autoconf; ./configure --prefix=/new/path; make; make install
  # cd ../automake; ./configure --prefix=/new/path; make; make install
  # cd ../libtool; ./configure --prefix=/new/path $OSXOPTS; make; make install

Oh, one other detail. It is often a good idea to tell the new aclocal where to find m4 files installed in the normal locations. This involves creating a dirlist file.

http://www.gnu.org/s/hello/manual/automake/Macro-Search-Path.html

The proper versions of everything are found because they are first in PATH. The OSXOPTS for libtool is needed because Apple ships another program named libtool. GNU libtool is renamed to glibtool (IIRC). There are a couple ways of effecting this. IIRC, I use a setting that prefixes the program with a g (something like --program-prefix=g).

Later, Daniel
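A sketch of the dirlist step. The prefix below is a temporary stand-in for /new/path, and the aclocal version directory name is an assumption (check yours with `aclocal --version`):

```shell
# Tell a privately installed aclocal to also search the system-wide
# m4 directory by listing it in its "dirlist" file.
prefix=$(mktemp -d)    # stand-in for /new/path
mkdir -p "$prefix/share/aclocal-1.16"
echo /usr/share/aclocal > "$prefix/share/aclocal-1.16/dirlist"
```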
Re: How to use alternative autoconf than what is located in /usr/local/bin (for automake compile)?
FYI, when I install a new version of the autotools, I do something like:

  # PATH=/new/path/bin:$PATH
  # cd m4; ./configure --prefix=/new/path; make; make install
  # cd ../autoconf; ./configure --prefix=/new/path; make; make install
  # cd ../automake; ./configure --prefix=/new/path; make; make install
  # cd ../libtool; ./configure --prefix=/new/path $OSXOPTS; make; make install

The proper versions of everything are found because they are first in PATH. The OSXOPTS for libtool is needed because Apple ships another program named libtool. GNU libtool is renamed to glibtool (IIRC). There are a couple ways of effecting this. IIRC, I use a setting that prefixes the program with a g (something like --program-prefix=g).

Later, Daniel
Re: absolute or relative path for source files
On Sun, 15 May 2011, Vincent Torri wrote:
In a Makefile.am that is in the "$(top_builddir)/path1/path3" directory, the library uses a file that is in the "$(top_builddir)/path1/path2" directory. Is it better to use an absolute path:

  mylib_SOURCES = $(top_builddir)/path1/path2/file.c

or a relative path:

  mylib_SOURCES = ../path2/file.c

Technically, both are relative (and defined by autoconf, IIRC). Use abs_top_builddir for an absolute path. Note that builddir should only be used to point to sources actually created by the build (srcdir for the others).

Not sure what the official story is. I treat these as any other absolute/relative path issue. Absolute paths break if the target moves; relative paths break if either target moves relative to the other. I generally prefer the shorter form, also preferring absolute paths if they appear in several directories. The variable dereference can also be preferable when including another makefile; it prevents automake from doing the include early.

- Daniel
Re: A way to specify a host CC during cross compile?
On Tue, 5 Apr 2011, Dave Hart wrote:
On Mon, Apr 4, 2011 at 22:15, Martin Hicks wrote:
Is there a way to specify a different compiler for compile-time helper programs that are used during the build of a package?

I think the short answer is no. I have some source files in a package which are built by running a C program, which would be a use case for $HOSTCC-type functionality. I use a cross-compiling AM_CONDITIONAL to disable the rules for updating those generated sources, and I distribute them. The result is that changes to the sources of the generated sources must be built in a non-cross-compile environment, and then the up-to-date generated sources can be transported (such as via make dist, or committing to SCM) to the cross-build environment.

Similar story here. HOSTCC is rarely the whole story. You may need to split the project into two parts, one for the build machine and one for the executable host/target:

  proj/configure.ac        # for host machine
  proj/tools/configure.ac  # for build machine

By default, the top build system recursively configures and builds the tools. For a cross-compile, the tools can be configured separately. My systems use --with-tools= to specify this situation.

Make does allow you to specify target-specific variables, so that might be easier in some cases. I have also done CXX=@SomeCXX@ to override the default in a single makefile...

- Daniel
Re: [GSoC Proposal] automake - Interfacing with a test protocol like TAP or subunit
On Sun, 20 Mar 2011, Ralf Wildenhues wrote:
Or add a subunit parser and a quick tap2subunit perl module today and have the best of both worlds? (This is meant as an honest question, even if it looks like a rhetorical one.)

I think that's a good approach. I don't really use either framework, but when somebody pointed me to TAP a while back and I read the protocol ... I came away unimpressed. The subunit protocol does seem a bit better laid out.

My impression is that TAP is a strict subset of subunit. The only reason to favor TAP is if it has better or more portable tools for consuming the data. Both protocols can be generated with equal ease (assuming the common subset is used).

Would it be possible for subunit to drop binary data into local files and report their paths rather than embed the data in the stream?

- Daniel
Re: PKG_CHECK_MODULES on system without pkg-config installed?
On Fri, 11 Mar 2011, Ralf Wildenhues wrote:
* Roger Leigh wrote on Thu, Mar 10, 2011 at 01:03:16PM CET:
This is not meant to sound like a troll, but: is anyone really *really* using static linking in 2011?

I'd love to answer no, but at least parts of the HPC crowd will do almost anything to get a couple percent more performance out of their code plus libraries. Static linking is a real low-hanging fruit there.

Static linking is also nice for self-contained executables. All sorts of benefits, including ease of distribution and not worrying about libraries crossing when old versions are kept for testing. I've worked on several projects where we've intentionally reverted to static linking for such reasons. They were medium-sized codebases with several libraries and executables, generally deployed to a couple dozen machines. One project in particular required that we keep all deployed versions in working order on a test server for easy use. Speed and space weren't issues. Simple correctness was.

- Daniel

P.S. pkg-config suffers many of the same issues as imake, but in generally less bad ways. I often need details not foreseen by the pkg-config writer, and it is frustrating trying to keep multiple pkg-config environments for similar but different builds.
Automake GSoC idea
Refactor Automake so it can easily be extended for new file types. Examples:

- Qt preprocessed files (e.g. moc and linguist generate sources to link)
- IDL files (may generate headers and several sources to link)
- NVidia's CUDA

Automake has many rules baked in; there is no clear extension mechanism. The above examples can be handled by a collusion of AC_CONFIG_COMMANDS, sed, makefile includes, and recursive make invocations, but it's not pretty.

Autoconf provides an extension framework via m4. Automake needs something similar. This would probably be an API for looking at declared source lists, outputting makefile snippets, reusing dependency-generation templates, etc. A process for automake to output templates to be expanded by configure and later included would also be nice.

Later, Daniel

P.S. I'm employed, but not with enough free time to mentor this. I could help by providing case studies of tasks that aren't easy with automake.
Re: AVRDUDE-5.8
On Sun, 27 Sep 2009, Ty Tower wrote:
I have avrdude 5.5 installed on my machine running Mandriva 2009. I have been trying to install 5.8 and get this error below, which seems to be from the ylwrap script, but it is too complex for me to sort out. Can someone tell me if this is a bug or my error ...

Wrong bug tracker. Someone here might know, but you should try the mailing lists and bug tracker listed at the top of http://savannah.nongnu.org/projects/avrdude/

- Daniel